Posted to user-zh@flink.apache.org by Fei Han <ha...@aliyun.com.INVALID> on 2021/06/09 11:01:10 UTC

Re: Submitting a Flink job via the SQL Client fails

I have now set the following environment variables:

export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
export PATH=${HADOOP_HOME}/bin:$PATH
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
export HBASE_CONF_DIR=/etc/hbase/conf
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
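
A quick sanity check here (a sketch, assuming the CDH parcel layout above) is to confirm that the jar providing org.apache.hadoop.mapred.JobConf, i.e. hadoop-mapreduce-client-core, actually appears on the exported classpath:

# JobConf lives in the Hadoop MapReduce client jars; they should show up in `hadoop classpath`
$HADOOP_HOME/bin/hadoop classpath | tr ':' '\n' | grep -i mapreduce

# If nothing matches, locate the jar under the parcel (the path below is an assumption,
# adjust it to the actual installation) and append its directory to HADOOP_CLASSPATH
find /opt/cloudera/parcels/CDH -name 'hadoop-mapreduce-client-core*.jar' 2>/dev/null | head -n 1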


After the job is submitted, the YARN web UI looks like this:
[screenshot not included in the plain-text archive]

Opening the Flink web UI shows an error:
[screenshot not included in the plain-text archive]
I am not sure what the cause is; could you please take a look?


------------------------------------------------------------------
From: Shengkai Fang <fs...@gmail.com>
Sent: Wednesday, June 9, 2021, 09:54
To: user-zh <us...@flink.apache.org>; Fei Han <ha...@aliyun.com>
Subject: Re: Submitting a Flink job via the SQL Client fails

You can take a look at these earlier reports [1][2] and see whether they resolve your issue.

Best,
Shengkai

[1] http://apache-flink.147419.n8.nabble.com/Flink-td7866.html
[2] https://issues.apache.org/jira/browse/FLINK-20780
Fei Han <ha...@aliyun.com.invalid> wrote on Tuesday, June 8, 2021 at 8:03 PM:

 @all:
     Flink environment: Flink 1.13.1
     Hadoop environment: CDH 5.15.2
 The test command is: ./bin/sql-client.sh embedded -i /root/init_iceberg.sql -f /root/hive_catalog.sql
 Problem description: after running the command, the job is submitted to YARN successfully, but in today's test the Flink 1.13.1 web UI shows the following error:
 2021-06-08 12:02:45
 org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
     at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:207)
     at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
     at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:188)
     at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:677)
     at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
     at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:435)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
     at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
     at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
     at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
     at akka.actor.Actor.aroundReceive(Actor.scala:517)
     at akka.actor.Actor.aroundReceive$(Actor.scala:515)
     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
     at akka.actor.ActorCell.invoke(ActorCell.scala:561)
     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
     at akka.dispatch.Mailbox.run(Mailbox.scala:225)
     at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
     at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
     at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
     at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
     at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf
     at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:4045)
     at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:4013)
     at org.apache.iceberg.hive.HiveClientPool.<init>(HiveClientPool.java:45)
     at org.apache.iceberg.hive.CachedClientPool.lambda$clientPool$0(CachedClientPool.java:58)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2337)
     at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2335)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2318)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:111)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:54)
     at org.apache.iceberg.hive.CachedClientPool.clientPool(CachedClientPool.java:58)
     at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:77)
     at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:181)
     at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:94)
     at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:77)
     at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:93)
     at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.loadTable(TableLoader.java:113)
     at org.apache.iceberg.flink.sink.IcebergFilesCommitter.initializeState(IcebergFilesCommitter.java:125)
     at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:118)
     at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:290)
     at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:436)
     at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:582)
     at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
     at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:562)
     at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
     at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
     at java.lang.Thread.run(Thread.java:748)
 Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf
     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
     ... 29 more
 Could you please take a look? Thanks!
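
 The immediate cause in the stack trace above is that org.apache.hadoop.mapred.JobConf (shipped in the Hadoop MapReduce client jars, e.g. hadoop-mapreduce-client-core) cannot be found when the Iceberg committer builds its HiveConf, i.e. the Hadoop dependencies are not visible to the Flink classloader. A commonly suggested remedy, sketched below with paths that are assumptions for a CDH installation, is to export HADOOP_CLASSPATH in the same shell before launching the SQL Client, as the Flink Hadoop-integration docs recommend:

 # assumed CDH parcel path; adjust to the actual installation
 export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
 export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
 # then resubmit with the same command
 ./bin/sql-client.sh embedded -i /root/init_iceberg.sql -f /root/hive_catalog.sql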