Posted to issues@ambari.apache.org by "Tomomichi Hirano (JIRA)" <ji...@apache.org> on 2019/04/04 00:21:00 UTC

[jira] [Commented] (AMBARI-22271) Need hdfs-site for additional hiveserver2 with NameNode HA

    [ https://issues.apache.org/jira/browse/AMBARI-22271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809396#comment-16809396 ] 

Tomomichi Hirano commented on AMBARI-22271:
-------------------------------------------

Ambari 2.7.3 still has this issue.
Ambari installs HiveServer2 together with the following client components when a cluster is newly created:
{noformat}
MapReduce2 Client
Tez Client
YARN Client
ZooKeeper Client
{noformat}
But Ambari doesn't install them when an additional HiveServer2 is added. Additionally, Tez Client depends on HDFS Client, so HDFS Client needs to be installed for the new HiveServer2 as well (see the sketch below).
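
As a stopgap, the missing clients can be registered and installed by hand through the Ambari REST API. A minimal sketch, assuming placeholder cluster name, host name, and credentials (none of these are from a real cluster):
{code}
# Workaround sketch, not an official fix: register and install the client
# components that Ambari skips when an additional HiveServer2 is added.
# Installing HDFS_CLIENT also pushes hdfs-site.xml to the host, which is
# what the NameNode HA lookup needs.
import requests

AMBARI = "http://ambari.example.com:8080/api/v1"  # placeholder Ambari server
CLUSTER = "mycluster"                             # placeholder cluster name
HOST = "hs2-new.example.com"                      # host getting the extra HiveServer2
AUTH = ("admin", "admin")                         # placeholder credentials
HEADERS = {"X-Requested-By": "ambari"}            # Ambari requires this header on POST/PUT

# HDFS_CLIENT is included because Tez Client depends on it.
CLIENTS = ["HDFS_CLIENT", "MAPREDUCE2_CLIENT", "TEZ_CLIENT",
           "YARN_CLIENT", "ZOOKEEPER_CLIENT"]

for component in CLIENTS:
    url = f"{AMBARI}/clusters/{CLUSTER}/hosts/{HOST}/host_components/{component}"
    # Register the component on the host...
    requests.post(url, auth=AUTH, headers=HEADERS).raise_for_status()
    # ...then move it to INSTALLED so Ambari actually deploys it.
    requests.put(url, json={"HostRoles": {"state": "INSTALLED"}},
                 auth=AUTH, headers=HEADERS).raise_for_status()
{code}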

> Need hdfs-site for additional hiveserver2 with NameNode HA
> ----------------------------------------------------------
>
>                 Key: AMBARI-22271
>                 URL: https://issues.apache.org/jira/browse/AMBARI-22271
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.5.2
>         Environment: CentOS7.3
> Ambari 2.5.2
> HDP 2.6.2
>            Reporter: Masahiro Tanaka
>            Priority: Major
>
> When I added HiveServer2 to a server on which no other components were installed, in a NameNode HA environment, I got the error below:
> {code}
> java.lang.Error: Max start attempts 5 exhausted
>         at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:506)
>         at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
>         at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
>         at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
>         at org.apache.hive.service.cli.CLIService.init(CLIService.java:117)
>         at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
>         at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:122)
>         at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:474)
>         ... 9 more
> Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:547)
>         at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:130)
>         at org.apache.hive.service.cli.CLIService.init(CLIService.java:115)
>         ... 12 more
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
>         at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:438)
>         at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:321)
>         at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:690)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:631)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:160)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2795)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:179)
>         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:530)
>         ... 14 more
> Caused by: java.net.UnknownHostException: mycluster
>         ... 27 more
> 2017-10-20 01:27:04,136 INFO  [pool-1-thread-1]: server.HiveServer2 (HiveStringUtils.java:run(711)) - SHUTDOWN_MSG:
> {code}
> I looked into the environment and noticed that there is no hdfs-site.xml on the host, so it could not load org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider. That might be the cause.
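> For reference, a minimal illustration of the HA client properties hdfs-site.xml has to carry for the logical name "mycluster" to resolve (the NameNode hosts below are placeholders):
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>mycluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.mycluster</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>   <value>nn1.example.com:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>   <value>nn2.example.com:8020</value>
> </property>
> <property>
>   <!-- Without this property, DFSClient treats "mycluster" as a real
>        hostname and fails with UnknownHostException. -->
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> {code}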



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)