Posted to user-zh@flink.apache.org by Wayne <17...@163.com> on 2021/08/28 12:37:08 UTC

flink run -d -m yarn-cluster fails to submit the job to the YARN cluster

My submit command:


./bin/flink run -d -m yarn-cluster 


The error output is as follows:
 The program finished with the following exception:


java.lang.IllegalStateException: No Executor found. Please make sure to export the HADOOP_CLASSPATH environment variable or have hadoop in your classpath. For more information refer to the "Deployment" section of the official Apache Flink documentation.
        at org.apache.flink.yarn.cli.FallbackYarnSessionCli.isActive(FallbackYarnSessionCli.java:41)
        at org.apache.flink.client.cli.CliFrontend.validateAndGetActiveCommandLine(CliFrontend.java:1236)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:234)
        at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
        at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)


Output of running the command hadoop classpath:
xxxx@192 flink-1.12.2 % hadoop classpath
/Users/xxxx/local/hadoop/hadoop-3.2.2/etc/hadoop:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/common/lib/*:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/common/*:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/hdfs:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/hdfs/lib/*:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/hdfs/*:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/lib/*:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/*:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/yarn:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/yarn/lib/*:/Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/yarn/*:/Users/xxxx/local/hadoop/hadoop-3.2.2
I have configured HADOOP_CLASSPATH repeatedly, but it never takes effect. The official documentation gives:
export HADOOP_CLASSPATH=`hadoop classpath`
At what level, exactly, should this hadoop classpath be configured?








Re: flink run -d -m yarn-cluster fails to submit the job to the YARN cluster

Posted by Yang Wang <da...@gmail.com>.
export HADOOP_CLASSPATH=`hadoop classpath`

The approach above should work. Please confirm that the jar files under those directories actually exist, especially under /Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/yarn/
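
For example, a quick sanity check (a sketch using the paths from the hadoop classpath output above; adjust them if your layout differs):

# List the YARN jars that should end up on the classpath
ls /Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/yarn/*.jar
# Count the jars under the lib directory as well
ls /Users/xxxx/local/hadoop/hadoop-3.2.2/share/hadoop/yarn/lib/*.jar | wc -l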

Best,
Yang


Re: flink run -d -m yarn-cluster fails to submit the job to the YARN cluster

Posted by 龙逸尘 <ly...@gmail.com>.
Hi Wayne,

    You can try specifying HADOOP_CONF_DIR:
    export HADOOP_CONF_DIR=/opt/flink/hadoop-conf/
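
A sketch of what that could look like on the setup above (/opt/flink/hadoop-conf/ is only an example path; point the variable at whichever directory actually holds core-site.xml and yarn-site.xml, e.g. the etc/hadoop directory of the Hadoop install):

    # Point HADOOP_CONF_DIR at the directory with the Hadoop config files
    export HADOOP_CONF_DIR=/Users/xxxx/local/hadoop/hadoop-3.2.2/etc/hadoop
    # Verify the expected config files are present
    ls $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/yarn-site.xml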


Re: flink run -d -m yarn-cluster fails to submit the job to the YARN cluster

Posted by Caizhi Weng <ts...@gmail.com>.
Hi!

I'm not quite sure what "at what level should it be configured" means. The export command
exports the variable into the current shell session, and it stays in effect until you log out of
that shell. If you run flink run after running export in the same session, it should work.
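
A minimal sketch of that sequence, run from the flink-1.12.2 directory shown in the prompt above (the WordCount example jar is only a stand-in for the actual job jar):

# All in the same shell session
export HADOOP_CLASSPATH=`hadoop classpath`
echo $HADOOP_CLASSPATH   # should print the long classpath shown earlier
./bin/flink run -d -m yarn-cluster ./examples/streaming/WordCount.jar

To keep the variable across sessions, the export line can also be added to the shell profile (e.g. ~/.zshrc for the zsh prompt shown above).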
