Posted to issues@flink.apache.org by "teeguo (Jira)" <ji...@apache.org> on 2022/05/03 05:59:00 UTC

[jira] [Updated] (FLINK-27478) datastream.print() failed by uncertain cause

     [ https://issues.apache.org/jira/browse/FLINK-27478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

teeguo updated FLINK-27478:
---------------------------
    Summary: datastream.print() failed by uncertain cause  (was: failed job by uncertain cause )

> datastream.print() failed by uncertain cause
> --------------------------------------------
>
>                 Key: FLINK-27478
>                 URL: https://issues.apache.org/jira/browse/FLINK-27478
>             Project: Flink
>          Issue Type: Bug
>          Components: API / Python
>    Affects Versions: 1.14.4
>            Reporter: teeguo
>            Priority: Blocker
>
> For the following job:
> {code:python}
> from pyflink.common.serialization import JsonRowDeserializationSchema
> from pyflink.common.typeinfo import Types
> from pyflink.datastream import StreamExecutionEnvironment
> from pyflink.datastream.connectors import FlinkKafkaConsumer
> def state_access_demo():
>     env = StreamExecutionEnvironment.get_execution_environment()
>     deserialization_schema = JsonRowDeserializationSchema.builder() \
>         .type_info(type_info=Types.ROW_NAMED(
>             ["r0", "r1", "r2"],
>             [Types.STRING(), Types.STRING(), Types.STRING()])) \
>         .build()
>     kafka_consumer = FlinkKafkaConsumer(
>         topics='topic',
>         deserialization_schema=deserialization_schema,
>         properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'test-consumer-group'})
>     ds = env.add_source(kafka_consumer)
>     ds.print()
>     env.execute('state_access_demo')
> if __name__ == '__main__':
>     state_access_demo()
> {code}
> It fails with the following exception, which does not contain any useful information:
> {code:java}
> py4j.protocol.Py4JJavaError: An error occurred while calling o0.execute.
> : org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>         at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>         at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
>         at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
>         at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>         at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>         at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
>         at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>         at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>         at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>         at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>         at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
>         at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
>         at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>         at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
>         at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>         at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>         at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>         at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>         at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
>         at akka.dispatch.OnComplete.internal(Future.scala:300)
>         at akka.dispatch.OnComplete.internal(Future.scala:297)
>         at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
>         at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
>         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
>         at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
>         at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
>         at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
>         at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
>         at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
>         at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621)
>         at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24)
>         at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
>         at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
>         at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
>         at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
>         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
>         at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
>         at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
>         at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
>         at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
>         at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
>         at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
>         at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
>         at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
>         at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
>         at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
>         at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
>         at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
>         at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
>         at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:252)
>         at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:242)
>         at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:233)
>         at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:684)
>         at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
>         at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:444)
>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>         at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:316)
>         at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
>         at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:314)
>         at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
>         at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
>         at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
>         at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
>         at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
>         at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
>         at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
>         at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
>         at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>         at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>         at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
>         at akka.actor.Actor.aroundReceive(Actor.scala:537)
>         at akka.actor.Actor.aroundReceive$(Actor.scala:535)
>         at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:548)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:231)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
>         ... 5 more
> Caused by: java.lang.RuntimeException: Failed to create stage bundle factory!
> INFO:root:Initializing Python harness: F:\condaEnv\lib\site-packages\pyflink\fn_execution\beam\beam_boot.py --id=1-1 --provision_endpoint=localhost:60612
> INFO:root:Starting up Python harness in loopback mode.
>         at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:566)
>         at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:255)
>         at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:131)
>         at org.apache.flink.streaming.api.operators.python.AbstractOneInputPythonFunctionOperator.open(AbstractOneInputPythonFunctionOperator.java:116)
>         at org.apache.flink.streaming.api.operators.python.PythonProcessOperator.open(PythonProcessOperator.java:59)
>         at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
>         at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:711)
>         at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100)
>         at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:687)
>         at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
>         at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>         at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
>         at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
>         at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Process died with exit code 0
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
>         at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:451)
>         at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:436)
>         at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303)
>         at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:564)
>         ... 14 more
> Caused by: java.lang.IllegalStateException: Process died with exit code 0
>         at org.apache.beam.runners.fnexecution.environment.ProcessManager$RunningProcess.isAliveOrThrow(ProcessManager.java:75)
>         at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:112)
>         at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252)
>         at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
>         at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
>         ... 22 more{code}
>  
> PS: The stream prints successfully if I change the deserialization schema to SimpleStringSchema:
> {code:python}
> from pyflink.common.serialization import SimpleStringSchema
> from pyflink.datastream import StreamExecutionEnvironment
> from pyflink.datastream.connectors import FlinkKafkaConsumer
> def state_access_demo():
>     env = StreamExecutionEnvironment.get_execution_environment()
>     deserialization_schema = SimpleStringSchema()
>     kafka_consumer = FlinkKafkaConsumer(
>         topics='topic',
>         deserialization_schema=deserialization_schema,
>         properties={'bootstrap.servers': 'localhost:9092', 'group.id': 'test-consumer-group'})
>     ds = env.add_source(kafka_consumer)
>     ds.print()
>     env.execute('state_access_demo')
> if __name__ == '__main__':
>     state_access_demo()
> {code}
>  
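Note on the workaround: since SimpleStringSchema delivers each Kafka record as a raw JSON string, the r0/r1/r2 fields from the original type info can still be recovered on the Python side with a plain json.loads map. A minimal sketch, assuming the records are JSON objects with those keys; the parse_record helper is hypothetical, not part of the original job:

```python
import json

# Hypothetical helper: parse one raw Kafka record (a JSON string) into the
# (r0, r1, r2) fields that JsonRowDeserializationSchema would have produced.
def parse_record(raw):
    obj = json.loads(raw)
    return (obj.get("r0"), obj.get("r1"), obj.get("r2"))

# In the workaround job this could be applied after add_source, e.g.:
#   ds = env.add_source(kafka_consumer).map(parse_record)
print(parse_record('{"r0": "a", "r1": "b", "r2": "c"}'))
```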



--
This message was sent by Atlassian Jira
(v8.20.7#820007)