Posted to user-zh@flink.apache.org by Zhou Zach <wa...@163.com> on 2020/06/16 05:49:27 UTC

flink sql job submitted to yarn fails with an error

org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint YarnJobClusterEntrypoint.
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
at org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.main(YarnJobClusterEntrypoint.java:119)
Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (hdfs:/flink/ha/application_1592215995564_0027)
at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:103)
at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:450)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
... 13 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Cannot support file system for 'hdfs' via Hadoop, because Hadoop is not in the classpath, or some classes are missing from the classpath.
at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:184)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:446)
... 16 more
Caused by: java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/hadoop/hdfs/DFSClient.getQuotaUsage(Ljava/lang/String;)Lorg/apache/hadoop/fs/QuotaUsage; @160: areturn
  Reason:
    Type 'org/apache/hadoop/fs/ContentSummary' (current frame, stack[0]) is not assignable to 'org/apache/hadoop/fs/QuotaUsage' (from method signature)
  Current Frame:
    bci: @160
    flags: { }
    locals: { 'org/apache/hadoop/hdfs/DFSClient', 'java/lang/String', 'org/apache/hadoop/ipc/RemoteException', 'java/io/IOException' }
    stack: { 'org/apache/hadoop/fs/ContentSummary' }

It runs fine locally in IntelliJ IDEA; the flink job subscribes to Kafka and sinks to MySQL and HBase. Under the cluster's flink lib directory,

Re:Re:Re: flink sql job submitted to yarn fails with an error

Posted by Zhou Zach <wa...@163.com>.
high-availability: zookeeper
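
Combined with the fragments quoted elsewhere in this thread, the HA- and state-related part of flink-conf.yaml reads as follows (consolidated here for readability; every line below appears in this thread):

high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: cdh1:2181,cdh2:2181,cdh3:2181
high-availability.zookeeper.path.root: /flink
state.backend: filesystem
state.checkpoints.dir: hdfs://nameservice1:8020//user/flink10/checkpoints
state.savepoints.dir: hdfs://nameservice1:8020//user/flink10/savepoints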

Re: Re:Re: flink sql job submitted to yarn fails with an error

Posted by Yang Wang <da...@gmail.com>.
This looks like a problem caused by Hadoop incompatibility: the ContentSummary class changed starting from Hadoop 2.8 (since then it extends the new QuotaUsage class, which is exactly the assignment the VerifyError above complains about). So you need to confirm that the flink-shaded-hadoop jar under your lib directory is compatible with the version of your HDFS cluster.
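
A quick way to verify that compatibility (a sketch added for illustration, not part of the original reply; it assumes unzip is available and relies on flink-shaded-hadoop-2 keeping Hadoop's own classes under their original package names):

# the Hadoop version the cluster's client actually runs
hadoop version

# QuotaUsage only exists from Hadoop 2.8 onwards, so its presence or absence
# shows which generation of Hadoop the shaded jar was built against
unzip -l flink/lib/flink-shaded-hadoop-2-3.0.0-cdh6.3.0-7.0.jar \
  | grep -E 'org/apache/hadoop/fs/(QuotaUsage|ContentSummary)'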


Best,
Yang


Re: Re:Re: Re: Re:Re: flink sql job submitted to yarn fails with an error

Posted by JasonLee <17...@163.com>.
hi

Run export HADOOP_CLASSPATH=`hadoop classpath` first, and it will work.
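
In practice that means exporting the variable in the same shell session that submits the job, before invoking the Flink client. A minimal sketch (the Flink home and job jar paths are placeholders, not from this thread):

export HADOOP_CLASSPATH=`hadoop classpath`
cd /opt/flink-1.10.0                      # wherever the Flink distribution lives
./bin/flink run -m yarn-cluster /path/to/your-job.jar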



-----
Best Wishes
JasonLee

Re: Re:Re: Re: Re:Re: flink sql job submitted to yarn fails with an error

Posted by yangpengyi <96...@qq.com>.
Has this problem been solved? I hit the same error when submitting to the yarn cluster with Flink's yarn-per-job mode.




Re:Re: Re: Re:Re: flink sql job submitted to yarn fails with an error

Posted by Zhou Zach <wa...@163.com>.
Yes, it does produce output.

Re: Re: Re:Re: flink sql job submitted to yarn fails with an error

Posted by 王松 <sd...@gmail.com>.
Then when you run hadoop classpath on the command line, do you get the Hadoop classpath as output?


Re:Re: Re:Re: flink sql job submitted to yarn fails with an error

Posted by Zhou Zach <wa...@163.com>.
In /etc/profile, I have so far only added:
export HADOOP_CLASSPATH=`hadoop classpath`
I installed CDH and could not find that sbin directory...
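
A note added here for readers, not part of the original message: /etc/profile is only read by login shells, so after editing it the variable still has to be loaded into the current session before submitting:

source /etc/profile
echo "$HADOOP_CLASSPATH"   # should print the long classpath produced by `hadoop classpath`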

Re: Re:Re: flink sql job submitted to yarn fails with an error

Posted by 王松 <sd...@gmail.com>.
Have you set the HADOOP_HOME and HADOOP_CLASSPATH environment variables?

export HADOOP_HOME=/usr/local/hadoop-2.7.2
export HADOOP_CLASSPATH=`hadoop classpath`
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
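
On a CDH parcel installation like the one in this thread there is no separate sbin directory; the equivalent would look roughly like this (a sketch: the parcel path is the CDH default and is an assumption, not taken from this thread):

export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop   # assumed CDH parcel path
export HADOOP_CLASSPATH=`hadoop classpath`
export PATH=$HADOOP_HOME/bin:$PATH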


Re:Re:Re: flink sql job submitted to yarn fails with an error

Posted by Zhou Zach <wa...@163.com>.
Jars under flink/lib/:
flink-connector-hive_2.11-1.10.0.jar
flink-dist_2.11-1.10.0.jar
flink-jdbc_2.11-1.10.0.jar
flink-json-1.10.0.jar
flink-shaded-hadoop-2-3.0.0-cdh6.3.0-7.0.jar
flink-sql-connector-kafka_2.11-1.10.0.jar
flink-table_2.11-1.10.0.jar
flink-table-blink_2.11-1.10.0.jar
hbase-client-2.1.0.jar
hbase-common-2.1.0.jar
hive-exec-2.1.1.jar
mysql-connector-java-5.1.49.jar
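
Since the VerifyError in the original post mixes pre-2.8 and post-2.8 Hadoop fs classes, it can help to find which of these jars actually bundles the HDFS client. A sketch (assumes bash and unzip are available; added for illustration, not part of the original message):

# list the jars under flink/lib that ship the DFSClient class
for j in flink/lib/*.jar; do
  unzip -l "$j" | grep -q 'org/apache/hadoop/hdfs/DFSClient.class' && echo "$j"
done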

Re:Re: flink sql job submitted to yarn fails with an error

Posted by Zhou Zach <wa...@163.com>.



high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: cdh1:2181,cdh2:2181,cdh3:2181
state.backend: filesystem
state.checkpoints.dir: hdfs://nameservice1:8020//user/flink10/checkpoints
state.savepoints.dir: hdfs://nameservice1:8020//user/flink10/savepoints
high-availability.zookeeper.path.root: /flink

Re: flink sql job submitted to yarn fails with an error

Posted by 王松 <sd...@gmail.com>.
Could you paste the HA configuration from your config file?


Re:flink sql job submitted to yarn fails with an error

Posted by Zhou Zach <wa...@163.com>.
Putting flink-shaded-hadoop-2-3.0.0-cdh6.3.0-7.0.jar in the flink/lib directory, or bundling it into the fat jar, made no difference...