Posted to user-zh@flink.apache.org by "casel.chen" <ca...@126.com> on 2021/01/20 02:21:32 UTC

flink yarn application job-submission issue

I used the following command to submit a Flink job to run on YARN, and it failed. If I change the job jar path to a local one, there is no problem. I have already placed flink-oss-fs-hadoop-1.12.0.jar under Flink's lib directory and configured the OSS parameters in flink-conf.yaml. Does Flink really not support a job jar located on a remote distributed file system?


./bin/flink run-application -t yarn-application \
  -Dyarn.provided.lib.dirs="oss://odps-prd/rtdp/flinkLib" \
  oss://odps-prd/rtdp/flinkJobs/TopSpeedWindowing.jar



------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
	at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:443)
	at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:64)
	at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:207)
	at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:974)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1047)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1047)
Caused by: java.io.IOException: No FileSystem for scheme: oss
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
	at org.apache.flink.yarn.Utils.lambda$getQualifiedRemoteSharedPaths$1(Utils.java:577)
	at org.apache.flink.configuration.ConfigUtils.decodeListFromConfig(ConfigUtils.java:127)
	at org.apache.flink.yarn.Utils.getRemoteSharedPaths(Utils.java:585)
	at org.apache.flink.yarn.Utils.getQualifiedRemoteSharedPaths(Utils.java:573)
	at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:708)
	at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:558)
	at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:436)
	... 9 more
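For context, the immediate cause ("No FileSystem for scheme: oss") is thrown by Hadoop's FileSystem.getFileSystemClass, which resolves a path's scheme against the Hadoop configuration and Hadoop's own service registry, not against the plugins in Flink's lib directory. The following is a minimal, stdlib-only sketch of that lookup logic (a hypothetical simplification written for illustration; real Hadoop also consults a ServiceLoader registry):

```java
import java.io.IOException;
import java.net.URI;
import java.util.Map;

public class SchemeLookup {
    // Simplified mimic of Hadoop's lookup: check the conf key
    // "fs.<scheme>.impl"; if no implementation is registered, fail
    // the same way Hadoop's FileSystem.getFileSystemClass does.
    static String getFileSystemClass(String scheme, Map<String, String> conf) throws IOException {
        String impl = conf.get("fs." + scheme + ".impl");
        if (impl == null) {
            throw new IOException("No FileSystem for scheme: " + scheme);
        }
        return impl;
    }

    public static void main(String[] args) {
        // A conf that only knows hdfs, like a stock Hadoop client.
        Map<String, String> conf = Map.of(
                "fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        String scheme = URI.create("oss://odps-prd/rtdp/flinkJobs/TopSpeedWindowing.jar").getScheme();
        try {
            System.out.println(getFileSystemClass(scheme, conf));
        } catch (IOException e) {
            System.out.println(e.getMessage()); // No FileSystem for scheme: oss
        }
    }
}
```

The point of the sketch: the flink-oss-fs-hadoop jar registers "oss" with Flink's own FileSystem abstraction, which this Hadoop-side lookup never sees.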

Re: flink yarn application job-submission issue

Posted by Yang Wang <da...@gmail.com>.
Currently the user jar can be remote, but only with a Hadoop-compatible scheme, because the remote user jar is not downloaded to the Flink client; it is registered directly as a YARN local resource.

So the error you are seeing is expected; this is not supported yet.

Best,
Yang
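Since the client resolves these paths through Hadoop's FileSystem rather than Flink's plugin mechanism, one avenue a reader might explore (untested sketch, not confirmed by this thread) is making the scheme Hadoop-compatible: if the Hadoop installation ships the hadoop-aliyun module, the scheme could be registered in core-site.xml along these lines. The class name, property keys, and endpoint are assumptions to verify against your Hadoop version:

```xml
<!-- core-site.xml: hypothetical sketch; verify keys and class names
     against your Hadoop distribution's hadoop-aliyun documentation -->
<configuration>
  <property>
    <name>fs.oss.impl</name>
    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
  </property>
  <property>
    <name>fs.oss.endpoint</name>
    <value><!-- your OSS endpoint --></value>
  </property>
</configuration>
```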

casel.chen <ca...@126.com> wrote on Wed, Jan 20, 2021 at 10:23 AM:

> ./bin/flink run-application -t yarn-application \
>   -Dyarn.provided.lib.dirs="hdfs://localhost:9000/flinkLib" \
>   hdfs://localhost:9000/flinkJobs/TopSpeedWindowing.jar
>
> This way of running the command works.

Re: flink yarn application job-submission issue

Posted by "casel.chen" <ca...@126.com>.
./bin/flink run-application -t yarn-application \
  -Dyarn.provided.lib.dirs="hdfs://localhost:9000/flinkLib" \
  hdfs://localhost:9000/flinkJobs/TopSpeedWindowing.jar

This way of running the command works.