Posted to user@hadoop.apache.org by Lian Jiang <ji...@gmail.com> on 2018/07/18 20:28:29 UTC

HA yarn does not recognize HA hdfs uri

Hi,

I am enabling HA for HDFS and YARN on my HDP 2.6 cluster. HDFS starts, but
YARN fails to start with this error:

2018-07-18 18:11:23,967 FATAL applicationhistoryservice.ApplicationHistoryServer (ApplicationHistoryServer.java:launchAppHistoryServer(177)) - Error starting ApplicationHistoryServer
java.lang.IllegalArgumentException: java.net.UnknownHostException: test-cluster
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:439)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:321)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:690)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:631)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:160)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2795)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:179)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:374)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
        at org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.serviceInit(EntityGroupFSTimelineStore.java:209)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:111)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:174)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:184)


My current fs.defaultFS is "hdfs://test-cluster", the HA HDFS URI. It looks
like YARN cannot resolve this URI.

I found an example at https://cwiki.apache.org/confluence/display/AMBARI/Blueprint+Support+for+HA+Clusters
which uses fs.defaultFS=hdfs://%HOSTGROUP::master_1%:8020, which is not the
HA HDFS URI. Also, that example uses separate blueprints for HA HDFS and HA
YARN, while I use a single blueprint for both.
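For comparison, a logical nameservice URI like hdfs://test-cluster can only be resolved when the client-side HA properties are present in the configuration the service reads; without them, "test-cluster" is treated as a hostname and fails DNS lookup, which matches the UnknownHostException above. A minimal hdfs-site.xml sketch (the nameservice name mirrors my setup; hostnames and the nn1/nn2 IDs are illustrative):

```xml
<!-- Client-side HA settings needed to resolve hdfs://test-cluster.
     Hostnames below are illustrative placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>test-cluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.test-cluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.test-cluster.nn1</name>
  <value>master1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.test-cluster.nn2</name>
  <value>master2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.test-cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If the YARN daemons do not see these properties (e.g. they read a different hdfs-site.xml than the NameNodes do), they would fail exactly as shown even though HDFS itself starts fine.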

My questions:
1. Should I use separate blueprints for HA HDFS and HA YARN, as in the
example above? Or is the single blueprint for both that I am currently
using fine?

2. Can YARN use the HA HDFS URI instead of pointing to a specific NameNode?
The latter seems to defeat the purpose of HA.

Appreciate your hints.
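For what it's worth, in an Ambari blueprint these settings live in the top-level configurations block, which applies whether HDFS and YARN share one blueprint or not. A rough, abbreviated sketch (property values and hostnames illustrative, not taken from my cluster):

```json
{
  "Blueprints": { "stack_name": "HDP", "stack_version": "2.6" },
  "configurations": [
    { "core-site": { "properties": {
        "fs.defaultFS": "hdfs://test-cluster"
    } } },
    { "hdfs-site": { "properties": {
        "dfs.nameservices": "test-cluster",
        "dfs.ha.namenodes.test-cluster": "nn1,nn2"
    } } }
  ]
}
```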

Re: HA yarn does not recognize HA hdfs uri

Posted by Lian Jiang <ji...@gmail.com>.
Thanks Jeff.

But the link above shows the HA URI fs.defaultFS=hdfs://mycluster
in the HA HDFS blueprint. Any idea why it does not have a port? Thanks.
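My understanding (not authoritative) is that the logical nameservice carries no port because it is not a real host; the ports are bound to the per-NameNode RPC addresses that the nameservice expands to. Sketch, with illustrative hostnames:

```xml
<!-- hdfs://mycluster has no port; ports appear only on the individual
     NameNode RPC addresses the nameservice maps to (hosts illustrative). -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
```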

On Wed, Jul 18, 2018 at 2:07 PM, Jeff Hubbs <jh...@att.net> wrote:

> Lian -
>
> Your value of fs.defaultFS is supposed to have a port number, e.g.
> hdfs://test-cluster:9000.
>

Re: HA yarn does not recognize HA hdfs uri

Posted by Jeff Hubbs <jh...@att.net>.
Lian -

Your value of fs.defaultFS is supposed to have a port number, e.g. 
hdfs://test-cluster:9000.
