Posted to hdfs-dev@hadoop.apache.org by "Luigi Di Fraia (JIRA)" <ji...@apache.org> on 2017/07/10 13:26:03 UTC

[jira] [Created] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

Luigi Di Fraia created HDFS-12109:
-------------------------------------

             Summary: "fs" java.net.UnknownHostException when HA NameNode is used
                 Key: HDFS-12109
                 URL: https://issues.apache.org/jira/browse/HDFS-12109
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: fs
    Affects Versions: 2.8.0
         Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[hadoop@namenode01 ~]$ uname -a
Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
            Reporter: Luigi Di Fraia


After setting up an HA NameNode configuration, the following invocation of "fs" fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if the same properties are passed explicitly on the command line, as shown below:

/usr/local/hadoop/bin/hdfs dfs \
    -Ddfs.nameservices=saccluster \
    -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
    -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 \
    -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 \
    -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 \
    -ls /
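
For context, the bare invocation relies on fs.defaultFS resolving to the logical nameservice. A minimal sketch of the corresponding core-site.xml entry (assuming the standard HA client setup; the value matches the host name reported in the exception) would be:

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://saccluster</value>
    </property>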

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as follows:

    <property>
        <name>dfs.nameservices</name>
        <value>saccluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.saccluster</name>
        <value>namenode01,namenode02</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
        <value>namenode01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
        <value>namenode02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.saccluster.namenode01</name>
        <value>namenode01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.saccluster.namenode02</name>
        <value>namenode02:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate client configuration file?

Apologies if I am missing something obvious here.


