Posted to user-zh@flink.apache.org by havin <22...@qq.com> on 2021/06/16 01:11:05 UTC

A Flink job deployed in native Kubernetes application mode fails at runtime when calling stopWithSavepoint(JobID jobId, boolean advanceToEndOfTime, @Nullable String savepointDirectory)

*The calling code is as follows*


byte[] bytes = StringUtils.hexStringToByte(originalJob.getFlinkJobId());
CompletableFuture<String> completableFuture =
        new RestClusterClient(flinkConfiguration, originalJob.getClusterId())
                .stopWithSavepoint(new JobID(bytes), true, DataProcessConfig.getSavepointPath());
String savepointPath = completableFuture.get();
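
For reference, a minimal self-contained sketch of the same stop-with-savepoint call is below. This is not the original setup: the REST endpoint, cluster id, and savepoint directory are placeholder values, and JobID.fromHexString replaces the manual hex conversion. Note that RestClusterClient implements AutoCloseable and should be closed after use.

import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.rest.RestClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;

public class StopWithSavepointSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder values; substitute the real REST endpoint, cluster id and HDFS path.
        Configuration flinkConfiguration = new Configuration();
        flinkConfiguration.setString(RestOptions.ADDRESS, "jobmanager-rest-host");
        flinkConfiguration.setInteger(RestOptions.PORT, 8081);

        // RestClusterClient implements AutoCloseable, so release it when done.
        try (RestClusterClient<String> client =
                new RestClusterClient<>(flinkConfiguration, "my-cluster-id")) {
            CompletableFuture<String> future = client.stopWithSavepoint(
                    JobID.fromHexString("c94a917d20dd51706a27c82674e7fdc8"),
                    true, // advanceToEndOfTime: emit MAX_WATERMARK before stopping
                    "hdfs://namenode:8020/savepoints");
            String savepointPath = future.get();
            System.out.println("Savepoint completed at: " + savepointPath);
        }
    }
}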

*The client-side error is as follows*


2021-06-15 14:18:40.530 ERROR 1 --- [ent-IO-thread-1] o.apache.flink.runtime.rest.RestClient   : Received response was neither of the expected type ([simple type, class org.apache.flink.runtime.rest.handler.async.AsynchronousOperationResult<org.apache.flink.runtime.rest.messages.job.savepoints.SavepointInfo>]) nor an error.
Response=JsonResponse{json={"status":{"id":"COMPLETED"},"operation":{"failure-cause":{"class":"java.util.concurrent.CompletionException","stack-trace":"java.util.concurrent.CompletionException: java.io.IOException: Could not flush and close the file system output stream to hdfs://192.168.255.227:8020/flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata in order to obtain the stream state handle
	at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
	at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
	at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:925)
	at java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:913)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
	at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:234)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
	at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1044)
	at akka.dispatch.OnComplete.internal(Future.scala:263)
	at akka.dispatch.OnComplete.internal(Future.scala:261)
	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
	at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
	at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:573)
	at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
	at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
	at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
	at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
	at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
	at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
*Caused by: java.io.IOException: Could not flush and close the file system output stream to hdfs://192.168.255.227:8020/flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata in order to obtain the stream state handle*
	at org.apache.flink.runtime.state.filesystem.FsCheckpointMetadataOutputStream.closeAndFinalizeCheckpoint(FsCheckpointMetadataOutputStream.java:150)
	at org.apache.flink.runtime.state.filesystem.FsCheckpointMetadataOutputStream.closeAndFinalizeCheckpoint(FsCheckpointMetadataOutputStream.java:40)
	at org.apache.flink.runtime.checkpoint.PendingCheckpoint.finalizeCheckpoint(PendingCheckpoint.java:319)
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1187)
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1081)
	at org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$7(SchedulerBase.java:1045)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
*Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.*
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476)
	at org.apache.hadoop.ipc.Client.call(Client.java:1413)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy57.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy58.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
","serialized-throwable":""}}}, httpResponseStatus=200 OK}

org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "status" (class org.apache.flink.runtime.rest.messages.ErrorResponseBody), not marked as ignorable (one known property: "errors"])
 at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: org.apache.flink.runtime.rest.messages.ErrorResponseBody["status"])
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:840) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1192) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1592) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperties(BeanDeserializerBase.java:1542) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:504) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1287) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:326) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4173) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2467) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:2920) ~[flink-shaded-jackson-2.10.1-12.0.jar!/:2.10.1-12.0]
	at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:389) [flink-runtime_2.12-1.12.0.jar!/:1.12.0]
	at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:374) [flink-runtime_2.12-1.12.0.jar!/:1.12.0]
	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966) ~[na:1.8.0_282]
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940) ~[na:1.8.0_282]
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) ~[na:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_282]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_282]



*The JobManager error is as follows*


2021-06-15 14:15:35,673 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Triggering checkpoint 15 (type=CHECKPOINT) @ 1623766535653 for job c94a917d20dd51706a27c82674e7fdc8.
2021-06-15 14:15:35,817 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Completed checkpoint 15 for job c94a917d20dd51706a27c82674e7fdc8 (4290 bytes in 164 ms).
2021-06-15 14:15:36,654 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Triggering checkpoint 16 (type=CHECKPOINT) @ 1623766536653 for job c94a917d20dd51706a27c82674e7fdc8.
2021-06-15 14:15:36,669 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Completed checkpoint 16 for job c94a917d20dd51706a27c82674e7fdc8 (4290 bytes in 15 ms).
2021-06-15 14:15:36,791 INFO  org.apache.flink.runtime.jobmaster.JobMaster                  [] - Triggering stop-with-savepoint for job c94a917d20dd51706a27c82674e7fdc8.
2021-06-15 14:15:36,796 WARN  org.apache.flink.runtime.util.HadoopUtils                     [] - Could not find Hadoop configuration via any of the supported methods (Flink configuration, environment variables).
2021-06-15 14:15:38,327 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Triggering checkpoint 17 (type=SAVEPOINT_TERMINATE) @ 1623766536795 for job c94a917d20dd51706a27c82674e7fdc8.
2021-06-15 14:16:38,695 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - *Exception in createBlockOutputStream
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.244.4.54:9866]*
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1702) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1432) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1385) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
2021-06-15 14:16:38,696 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Abandoning BP-1203575084-10.244.2.44-1623759092067:blk_1073741831_1007
2021-06-15 14:16:38,709 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Excluding datanode DatanodeInfoWithStorage[10.244.4.54:9866,DS-63982cf0-6d77-438b-a1ac-b519cdfded02,DISK]
2021-06-15 14:17:38,739 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Exception in createBlockOutputStream
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.244.6.178:9866]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1702) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1432) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1385) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
2021-06-15 14:17:38,740 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Abandoning BP-1203575084-10.244.2.44-1623759092067:blk_1073741832_1008
2021-06-15 14:17:38,748 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Excluding datanode DatanodeInfoWithStorage[10.244.6.178:9866,DS-590b9ce1-2c94-4fa7-8089-7c9a24730384,DISK]
2021-06-15 14:18:38,818 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Exception in createBlockOutputStream
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.244.2.45:9866]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1702) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1432) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1385) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
2021-06-15 14:18:38,819 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Abandoning BP-1203575084-10.244.2.44-1623759092067:blk_1073741833_1009
2021-06-15 14:18:38,826 INFO  org.apache.hadoop.hdfs.DFSClient                              [] - Excluding datanode DatanodeInfoWithStorage[10.244.2.45:9866,DS-0044b76e-35f8-4340-abda-8b4dfcef446c,DISK]
2021-06-15 14:18:38,835 WARN  org.apache.hadoop.hdfs.DFSClient                              [] - DataStreamer Exception
org.apache.hadoop.ipc.RemoteException: File /flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at com.sun.proxy.$Proxy57.addBlock(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_282]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_282]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_282]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_282]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at com.sun.proxy.$Proxy58.addBlock(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) [flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
2021-06-15 14:18:38,853 WARN  org.apache.flink.runtime.jobmaster.JobMaster                  [] - Error while processing checkpoint acknowledgement message
org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize the pending checkpoint 17. Failure reason: Failure to finalize checkpoint.
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1200) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1081) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$7(SchedulerBase.java:1045) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_282]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_282]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_282]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_282]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
*Caused by: org.apache.flink.util.SerializedThrowable: Could not flush and close the file system output stream to hdfs://192.168.255.227:8020/flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata in order to obtain the stream state handle*
	at org.apache.flink.runtime.state.filesystem.FsCheckpointMetadataOutputStream.closeAndFinalizeCheckpoint(FsCheckpointMetadataOutputStream.java:150) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.state.filesystem.FsCheckpointMetadataOutputStream.closeAndFinalizeCheckpoint(FsCheckpointMetadataOutputStream.java:40) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.checkpoint.PendingCheckpoint.finalizeCheckpoint(PendingCheckpoint.java:319) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1187) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	... 9 more
*Caused by: org.apache.flink.util.SerializedThrowable: File /flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.*
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at com.sun.proxy.$Proxy57.addBlock(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_282]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_282]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_282]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_282]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at com.sun.proxy.$Proxy58.addBlock(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
2021-06-15 14:18:38,864 INFO  org.apache.flink.runtime.jobmaster.JobMaster                  [] - Trying to recover from a global failure.
org.apache.flink.runtime.checkpoint.CheckpointException: Failure to finalize checkpoint.
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1194) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1081) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$7(SchedulerBase.java:1045) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_282]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_282]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_282]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_282]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_282]
*Caused by: org.apache.flink.util.SerializedThrowable: Could not flush and close the file system output stream to hdfs://192.168.255.227:8020/flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata in order to obtain the stream state handle*
	at org.apache.flink.runtime.state.filesystem.FsCheckpointMetadataOutputStream.closeAndFinalizeCheckpoint(FsCheckpointMetadataOutputStream.java:150) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.state.filesystem.FsCheckpointMetadataOutputStream.closeAndFinalizeCheckpoint(FsCheckpointMetadataOutputStream.java:40) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.checkpoint.PendingCheckpoint.finalizeCheckpoint(PendingCheckpoint.java:319) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1187) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
	... 9 more
*Caused by: org.apache.flink.util.SerializedThrowable: File /flinklmj/savepoint-c94a91-82b6e6c31dd0/_metadata could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.*
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at com.sun.proxy.$Proxy57.addBlock(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_282]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_282]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_282]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_282]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at com.sun.proxy.$Proxy58.addBlock(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) ~[flink-shaded-hadoop-2-uber-2.7.5-9.0.jar:2.7.5-9.0]
2021-06-15 14:18:38,866 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job Flink Streaming Job (c94a917d20dd51706a27c82674e7fdc8) switched from state RUNNING to RESTARTING.
2021-06-15 14:18:38,910 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (43b658fd1342abb57d12172fe719445f) switched from RUNNING to CANCELING.
2021-06-15 14:18:38,914 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (669d727c6dccfb936c7559ae4f0a39d7) switched from RUNNING to CANCELING.
2021-06-15 14:18:38,915 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (474db2b91bd60c0d8298ab6455624671) switched from RUNNING to CANCELING.
2021-06-15 14:18:38,915 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Filter -> Sink: Unnamed (1/1) (994732ff9585535eb5fcb8185f42ea71) switched from RUNNING to CANCELING.
2021-06-15 14:18:38,940 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (43b658fd1342abb57d12172fe719445f) switched from CANCELING to CANCELED.
2021-06-15 14:18:38,947 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (669d727c6dccfb936c7559ae4f0a39d7) switched from CANCELING to CANCELED.
2021-06-15 14:18:39,017 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (474db2b91bd60c0d8298ab6455624671) switched from CANCELING to CANCELED.
2021-06-15 14:18:39,033 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Filter -> Sink: Unnamed (1/1) (994732ff9585535eb5fcb8185f42ea71) switched from CANCELING to CANCELED.
2021-06-15 14:18:39,042 WARN  org.apache.flink.runtime.jobmaster.JobMaster                  [] - Stop-with-savepoint transitioned from FinalState to FinalState on execution termination handling for job c94a917d20dd51706a27c82674e7fdc8 with some executions being in an not-finished state: [CANCELED]
2021-06-15 14:18:39,916 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job Flink Streaming Job (c94a917d20dd51706a27c82674e7fdc8) switched from state RESTARTING to RUNNING.
2021-06-15 14:18:39,917 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Restoring job c94a917d20dd51706a27c82674e7fdc8 from Checkpoint 16 @ 1623766536653 for c94a917d20dd51706a27c82674e7fdc8 located at <checkpoint-not-externally-addressable>.
2021-06-15 14:18:39,920 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - No master state to restore
2021-06-15 14:18:39,920 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Filter -> Sink: Unnamed (1/1) (9f70cbe6269fbd8c87162d728aecf084) switched from CREATED to SCHEDULED.
2021-06-15 14:18:39,924 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Filter -> Sink: Unnamed (1/1) (9f70cbe6269fbd8c87162d728aecf084) switched from SCHEDULED to DEPLOYING.
2021-06-15 14:18:39,924 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Deploying Source: Custom Source -> Process -> Filter -> Sink: Unnamed (1/1) (attempt #1) with attempt id 9f70cbe6269fbd8c87162d728aecf084 to jobmanager-1623766484380-taskmanager-1-1 @ 10.100.247.62 (dataPort=35321) with allocation id e4533120e5f008948d4b8a95bf1b5fea
2021-06-15 14:18:39,925 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (cc68c1245dc2b38f76dc221f779f4b75) switched from CREATED to SCHEDULED.
2021-06-15 14:18:39,925 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (cc68c1245dc2b38f76dc221f779f4b75) switched from SCHEDULED to DEPLOYING.
2021-06-15 14:18:39,925 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Deploying Source: Custom Source -> Process -> Sink: Unnamed (1/1) (attempt #2) with attempt id cc68c1245dc2b38f76dc221f779f4b75 to jobmanager-1623766484380-taskmanager-1-1 @ 10.100.247.62 (dataPort=35321) with allocation id e4533120e5f008948d4b8a95bf1b5fea
2021-06-15 14:18:39,925 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (62071f8dbffab22f4246ff301dfd3523) switched from CREATED to SCHEDULED.
2021-06-15 14:18:39,926 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (62071f8dbffab22f4246ff301dfd3523) switched from SCHEDULED to DEPLOYING.
2021-06-15 14:18:39,926 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Deploying Source: Custom Source -> Process -> Sink: Unnamed (1/1) (attempt #1) with attempt id 62071f8dbffab22f4246ff301dfd3523 to jobmanager-1623766484380-taskmanager-1-1 @ 10.100.247.62 (dataPort=35321) with allocation id e4533120e5f008948d4b8a95bf1b5fea
2021-06-15 14:18:39,926 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (1b3c07a106d898f21bc3910925e0b3df) switched from CREATED to SCHEDULED.
2021-06-15 14:18:39,927 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (1b3c07a106d898f21bc3910925e0b3df) switched from SCHEDULED to DEPLOYING.
2021-06-15 14:18:39,927 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Deploying Source: Custom Source -> Process -> Sink: Unnamed (1/1) (attempt #1) with attempt id 1b3c07a106d898f21bc3910925e0b3df to jobmanager-1623766484380-taskmanager-1-1 @ 10.100.247.62 (dataPort=35321) with allocation id e4533120e5f008948d4b8a95bf1b5fea
2021-06-15 14:18:39,944 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (cc68c1245dc2b38f76dc221f779f4b75) switched from DEPLOYING to RUNNING.
2021-06-15 14:18:39,944 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Filter -> Sink: Unnamed (1/1) (9f70cbe6269fbd8c87162d728aecf084) switched from DEPLOYING to RUNNING.
2021-06-15 14:18:39,946 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (62071f8dbffab22f4246ff301dfd3523) switched from DEPLOYING to RUNNING.
2021-06-15 14:18:39,965 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: Custom Source -> Process -> Sink: Unnamed (1/1) (1b3c07a106d898f21bc3910925e0b3df) switched from DEPLOYING to RUNNING.
2021-06-15 14:18:40,840 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Triggering checkpoint 18 (type=CHECKPOINT) @ 1623766720840 for job c94a917d20dd51706a27c82674e7fdc8.
2021-06-15 14:18:40,854 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Completed checkpoint 18 for job c94a917d20dd51706a27c82674e7fdc8 (4290 bytes in 13 ms).
2021-06-15 14:18:41,841 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Triggering checkpoint 19 (type=CHECKPOINT) @ 1623766721840 for job c94a917d20dd51706a27c82674e7fdc8.
2021-06-15 14:18:41,915 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Completed checkpoint 19 for job c94a917d20dd51706a27c82674e7fdc8 (4290 bytes in 75 ms).
2021-06-15 14:18:42,840 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Triggering checkpoint 20 (type=CHECKPOINT) @ 1623766722840 for job c94a917d20dd51706a27c82674e7fdc8.
2021-06-15 14:18:42,857 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Completed checkpoint 20 for job c94a917d20dd51706a27c82674e7fdc8 (4290 bytes in 16 ms).

If anything is missing from what I pasted, please let me know.
Any advice on how to handle this would be much appreciated. Thanks.



