Posted to user@hadoop.apache.org by Arpit Agarwal <aa...@cloudera.com.INVALID> on 2019/05/02 14:45:38 UTC

Re: Issue formatting Namenode in HA cluster using Kerberos

You can use /etc/hosts entries as a workaround.

If this is a PoC/test environment, a less secure workaround is host-less principals, i.e. omitting the _HOST pattern. This is not usually recommended, since every service instance will share the same principal and it may be easier to impersonate a service if the keytab is compromised.
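For the /etc/hosts route, static forward entries on each container are enough, since glibc consults /etc/hosts for reverse lookups as well. A sketch of such a fragment, using hostnames and addresses taken from the log later in this thread (your overlay network addresses will differ, and Swarm may reassign them on restart):

```
# /etc/hosts on each Hadoop container -- IPs must match the overlay network
10.0.0.60    hdfs-namenode1
10.0.0.250   hdfs-journalnode1
10.0.0.232   hdfs-journalnode2
10.0.0.238   hdfs-journalnode3
```

With these entries in place, the reverse lookup of each JournalNode IP yields the hostname the Kerberos principal was created for.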


> On Apr 30, 2019, at 1:11 PM, Adam Jorgensen <ad...@spandigital.com> wrote:
> 
> Ahhhhh....reverse DNS you say............oh dear
> 
> As per https://github.com/docker/for-linux/issues/365 it seems that reverse DNS has been rather broken in Docker since early 2018 :-(
> 
> I'm going to have to do some digging to see if I can find some way to fix this, I guess, since the other option is a painful dance involving registering Kerberos principals for the specific IPs.
> 
> On Tue, Apr 30, 2019 at 9:56 PM Arpit Agarwal <aagarwal@cloudera.com> wrote:
> Likely your reverse DNS is not configured properly. You can check it by running ‘dig -x 10.0.0.238’.
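If dig is not installed inside the container, the same reverse-DNS check can be sketched from Python's standard library (the IP below is the one from the dig example, and is hypothetical for any other setup):

```python
import socket

ip = "10.0.0.238"  # JournalNode address from the dig example above
try:
    # PTR lookup: succeeds only when reverse DNS is configured for ip
    print(ip, "resolves back to", socket.gethostbyaddr(ip)[0])
except OSError as e:
    print("reverse DNS is broken for", ip, "-", e)
```

A working setup prints the JournalNode hostname; the broken-Docker case falls into the except branch.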
> 
> 
>> On Apr 30, 2019, at 12:24 PM, Adam Jorgensen <adam.jorgensen@spandigital.com> wrote:
>> 
>> Hi all, my first post here. I'm looking for some help with an issue I'm having while attempting to format my NameNode. I'm running an HA configuration and have configured Kerberos for authentication.
>> Additionally, I am running Hadoop using Docker Swarm.
>> 
>> The issue I'm having is that when I attempt to format the Namenode the operation fails with complaints that the QJM JournalNodes do not have a valid Kerberos principal. However, the issue is more specific: the operation to format the JournalNodes appears to use a Kerberos principal of the form SERVICE/IP@REALM, whereas the principals I have configured use the hostname rather than the IP.
>> 
>> If you take a look at the logging output captured below you will get a better idea of what the issue is.
>> 
>> Has anyone run into this before? Is there a way I can tell the Namenode format to use the correct principals?
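The mismatch in the log below arises because a secure Hadoop client fills the _HOST placeholder in a principal pattern like root/_HOST@HADOOP with the result of a reverse lookup of the server's address. A rough Python sketch of that substitution (a simplification for illustration, not Hadoop's actual SecurityUtil code):

```python
import socket

def canonical_host(addr):
    """Reverse-resolve addr, falling back to the literal address when
    the PTR lookup fails -- the broken-reverse-DNS case in this thread."""
    try:
        return socket.gethostbyaddr(addr)[0]
    except OSError:
        return addr

def fill_principal(pattern, host):
    """Fill the _HOST placeholder in a principal pattern, roughly as a
    secure Hadoop client does before authenticating to a server."""
    return pattern.replace("_HOST", host)

# Hypothetical: with no PTR record for 10.0.0.236, canonical_host returns
# the IP unchanged, so the client expects root/10.0.0.236@HADOOP instead
# of the root/hdfs-journalnode5@HADOOP the server actually presents.
print(fill_principal("root/_HOST@HADOOP", canonical_host("10.0.0.236")))
```

This is why the exceptions below all say "expecting: root/IP@HADOOP" rather than the hostname-based principals that were actually registered.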
>> 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:54,107 INFO namenode.NameNode: STARTUP_MSG:  
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | /************************************************************ 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG: Starting NameNode 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   host = hdfs-namenode1/10.0.0.60
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   args = [-format, -nonInteractive] 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   version = 3.1.2 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   classpath = /opt/hadoop-latest/etc/hadoop:/opt/hadoop-latest/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/sha
>> re/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-lates
>> t/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/had
>> oop-latest/share/hadoop/common/lib/commons-compress-1.18.jar:/opt/hadoop-latest/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-l
>> atest/share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:
>> /opt/hadoop-latest/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-databi
>> nd-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/js
>> on-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-framework-2.13.0.jar:/opt/hadoop-latest/share/hadoop/co
>> mmon/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/
>> kerb-server-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby
>> -asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/stax2-api-3.1.
>> 4.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-common-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/asm-5.0.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsr311-api-1.1.1.jar
>> :/opt/hadoop-latest/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-pkix-1.0.1.ja
>> r:/opt/hadoop-latest/share/hadoop/common/lib/commons-net-3.6.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-map
>> per-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-client-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator
>> -recipes-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-latest/sha
>> re/hadoop/common/lib/hadoop-auth-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/h
>> adoop-latest/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-server-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/hadoop-
>> annotations-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-latest/share/hadoop/common
>> /lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-latest/share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/avro-1.7.7.jar:/opt/hadoop-latest/share/hadoop/common/li
>> b/javax.servlet-api-3.1.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-codec-1.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-io-2.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsc
>> h-0.1.54.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2
>> -tests.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-nfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-kms-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs:/opt/hadoop-latest/share/hadoop
>> /hdfs/lib/commons-math3-3.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/common
>> s-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/hdfs/li
>> b/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-compress-1.18.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-ut
>> il-ajax-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/htt
>> pcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hado
>> op/hdfs/lib/paranamer-2.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop
>> /hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop-latest/share/hadoop/hdf
>> s/lib/curator-framework-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jer
>> sey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-crypto-1.0.1
>> .jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/hadoop-
>> latest/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/asm-5.0.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-latest/share/h
>> adoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/h
>> dfs/lib/kerby-pkix-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-latest/share/hadoop/h
>> dfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/curator-client-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/hadoop-latest/share/h
>> adoop/hdfs/lib/curator-recipes-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-lat
>> est/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/hadoop-auth-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt
>> /hadoop-latest/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/json
>> -simple-1.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/hadoop-annotations-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/token-
>> provider-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/zo
>> okeeper-3.4.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/avro-1.7.7.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/gson-2.2.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-codec-1.11.ja
>> r:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/hadoop-latest/share/ha
>> doop/hdfs/lib/kerb-identity-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2.jar:/opt/hadoop-latest/
>> share/hadoop/hdfs/hadoop-hdfs-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-3.1.2.jar:/opt/hadoop
>> -latest/share/hadoop/hdfs/hadoop-hdfs-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-client-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop-
>> latest/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/h
>> adoop-mapreduce-client-uploader-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-m
>> apreduce-client-jobclient-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce
>> -client-nativetask-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn:/opt/hadoop-latest/share/hadoop/y
>> arn/lib/metrics-core-3.2.4.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop-latest/share/hadoop/
>> yarn/lib/guice-servlet-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop-latest/share/hadoop/yarn
>> /lib/jackson-jaxrs-json-provider-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/hadoop-latest/share/hadoop
>> /yarn/lib/jersey-guice-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/hadoop-latest/share/hadoop/yarn
>> /lib/guice-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/d
>> nsjava-2.1.7.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-registry-3.1.2.jar:/opt/hadoop-lat
>> est/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-serv
>> er-nodemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-common-3.1.2.jar:/opt/hadoop-la
>> test/share/hadoop/yarn/hadoop-yarn-server-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.2.jar:/opt/hadoop-lates
>> t/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server
>> -router-3.1.2.jar 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r 1019dde65bcf12e05ef48ac71e84550d589e5d9a; compiled by 'sunilg' on 2019-01-29T01:39Z
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   java = 1.8.0_202 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | ************************************************************/ 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:54,115 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:54,192 INFO namenode.NameNode: createNameNode [-format, -nonInteractive] 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,034 INFO security.UserGroupInformation: Login successful for user root/hdfs-namenode1@HADOOP using keytab file /etc/krb5.keytab 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,110 INFO common.Util: Assuming 'file' scheme for path /opt/hadoop-latest/data in configuration. 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,111 INFO common.Util: Assuming 'file' scheme for path /opt/hadoop-latest/data in configuration. 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | Formatting using clusterid: CID-ad09dcfb-0152-4ded-9f72-2be4dfd729b8 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,141 INFO namenode.FSEditLog: Edit logging is async:true 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,152 INFO namenode.FSNamesystem: KeyProvider: null 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,153 INFO namenode.FSNamesystem: fsLock is fair: true 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,154 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,154 INFO namenode.FSNamesystem: fsOwner             = root/hdfs-namenode1@HADOOP (auth:KERBEROS) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,154 INFO namenode.FSNamesystem: supergroup          = supergroup 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,155 INFO namenode.FSNamesystem: isPermissionEnabled = true 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,156 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster remains unresolved for ID secondary. Check your hdfs-site.xml file to ensure namenodes are configured properly. 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,156 INFO namenode.FSNamesystem: Determined nameservice ID: hadoop-hdfs-cluster 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,156 INFO namenode.FSNamesystem: HA Enabled: true 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,192 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,203 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,203 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=false 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,206 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,206 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Apr 30 11:17:55 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,207 INFO util.GSet: Computing capacity for map BlocksMap 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,207 INFO util.GSet: VM type       = 64-bit 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,208 INFO util.GSet: 2.0% max memory 3.5 GB = 70.8 MB 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,208 INFO util.GSet: capacity      = 2^23 = 8388608 entries 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,226 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = true 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,226 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,226 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster remains unresolved for ID secondary. Check your hdfs-site.xml file to ensure namenodes are configured properly. 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: defaultReplication         = 3 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: maxReplication             = 512 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: minReplication             = 1 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,240 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,240 INFO blockmanagement.BlockManager: encryptDataTransfer        = false 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,240 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,257 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: Computing capacity for map INodeMap 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: VM type       = 64-bit 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: 1.0% max memory 3.5 GB = 35.4 MB 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: capacity      = 2^22 = 4194304 entries 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.FSDirectory: ACLs enabled? false 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.FSDirectory: XAttrs enabled? true 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.NameNode: Caching file names occurring more than 10 times 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,274 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,276 INFO snapshot.SnapshotManager: SkipList is disabled 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: Computing capacity for map cachedBlocks 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: VM type       = 64-bit 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: 0.25% max memory 3.5 GB = 8.8 MB 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: capacity      = 2^20 = 1048576 entries 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,318 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,318 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,319 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,322 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,322 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: Computing capacity for map NameNodeRetryCache 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: VM type       = 64-bit 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: 0.029999999329447746% max memory 3.5 GB = 1.1 MB 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: capacity      = 2^17 = 131072 entries 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,347 WARN client.QuorumJournalManager: Quorum journal URI 'qjournal://hdfs-journalnode1:8485;hdfs-journalnode2:8485;hdfs-journalnode3:8485;hdfs-journalnode4:8485;hdfs-journalnode5:8485;hdfs-journalnode6:8485;hdfs-journalnode7:8485;hdfs-journalnode8:8485/hadoop-hdfs-cluster' has an even number of Journal Nodes specified. This is not recommended!
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,653 WARN namenode.NameNode: Encountered exception during format:  
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 7 exceptions thrown: 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485: DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485: DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485: DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485: DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485: DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485: DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485: DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710) 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,657 ERROR namenode.NameNode: Failed to start namenode. 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 7 exceptions thrown: 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | [same seven "Server has invalid Kerberos principal" exceptions and QuorumException stack trace as above]
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,658 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 7 exceptions thrown: 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | [same seven "Server has invalid Kerberos principal" exceptions as above]
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,660 INFO namenode.NameNode: SHUTDOWN_MSG:  
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | /************************************************************ 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | SHUTDOWN_MSG: Shutting down NameNode at hdfs-namenode1/10.0.0.60 
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | ************************************************************/
>> 
> 


Re: Issue formatting Namenode in HA cluster using Kerberos

Posted by Arpit Agarwal <aa...@cloudera.com.INVALID>.
Yeah it looks like the JournalNode sees the inbound connection from 10.0.0.4, not 10.0.0.66. Perhaps some Docker expert can chime in.

You can try host-less principals for now, i.e. no _HOST pattern in the principal names. Not recommended for serious deployments, of course.
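To see why the mismatch looks the way it does in your logs: the Kerberized RPC client expands the `_HOST` placeholder in the server's configured principal using the name it resolves for the connection address; when reverse DNS is broken (as in Docker), that "name" ends up being the raw IP string. A minimal sketch of the substitution, with a hypothetical helper name (this is illustrative, not Hadoop's actual code):

```python
def expand_principal(pattern: str, resolved_host: str) -> str:
    """Substitute the _HOST placeholder the way a Kerberized client does.

    `pattern` is e.g. "root/_HOST@HADOOP"; `resolved_host` is whatever the
    client resolved for the server's address -- the hostname when reverse
    DNS works, otherwise the bare IP string.
    """
    return pattern.replace("_HOST", resolved_host)

# Reverse DNS working: client expects the principal the JournalNode has.
print(expand_principal("root/_HOST@HADOOP", "hdfs-journalnode5"))
# -> root/hdfs-journalnode5@HADOOP

# Reverse DNS broken: client expects an IP-based principal, which does not
# match the JournalNode's keytab, so the connection is rejected.
print(expand_principal("root/_HOST@HADOOP", "10.0.0.236"))
# -> root/10.0.0.236@HADOOP
```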

A few other comments:

1. I would start with just 3 JNs.
2. It is better to start HDFS services and run the format command as the ‘hdfs’ user, not the root user. HDFS does not care about the user name, but using ‘hdfs’ will make your configuration simpler.
3. Make sure the /etc/hosts file is present on all nodes.
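Putting the host-less workaround and the 'hdfs' user together, the relevant hdfs-site.xml entries would look roughly like this. This is a sketch only, assuming the realm HADOOP from your logs; the usual secure form would be hdfs/_HOST@HADOOP instead:

```xml
<!-- Sketch: host-less principals (no _HOST), so every JournalNode and
     NameNode shares one principal. Less secure; acceptable for a PoC. -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs@HADOOP</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>hdfs@HADOOP</value>
</property>
```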


I feel your pain; first-time secure cluster setup is difficult, and Docker seems to be creating additional complications.
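One caveat on the /etc/hosts approach in the quoted message below: the entries only help if the resolver actually consults the file for reverse lookups (running `getent hosts 10.0.0.66` inside each container is a quick sanity check). A small Python sketch of the IP-to-name map those entries are meant to supply, parsing hosts-format text rather than querying the live resolver:

```python
def reverse_map(hosts_text: str) -> dict:
    """Build an IP -> canonical-hostname map from /etc/hosts-style lines."""
    mapping = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if names:
            mapping[ip] = names[0]  # canonical name is listed first
    return mapping

hosts = """\
10.0.0.66 hdfs-namenode1
10.0.0.49 hdfs-journalnode1
10.0.0.61 hdfs-journalnode5
"""
rmap = reverse_map(hosts)
print(rmap["10.0.0.61"])  # -> hdfs-journalnode5
```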


> On May 6, 2019, at 12:04 PM, Adam Jorgensen <ad...@spandigital.com> wrote:
> 
> Okay, I'm now trying to mimic working Reverse DNS by explicitly configuring entries in my /etc/hosts file.
> 
> So, for example, on my namenode /etc/hosts contains:
> 
> 10.0.0.66 hdfs-namenode1 
> 10.0.0.49 hdfs-journalnode1 
> 10.0.0.67 hdfs-journalnode2 
> 10.0.0.47 hdfs-journalnode3 
> 10.0.0.53 hdfs-journalnode4 
> 10.0.0.61 hdfs-journalnode5 
> 10.0.0.59 hdfs-journalnode6 
> 10.0.0.51 hdfs-journalnode7 
> 10.0.0.57 hdfs-journalnode8
> 
> The journal nodes contain the same definitions.
> 
> Now, when I attempt to run namenode -format I get the following output:
> bash-4.4# bin/hdfs namenode -format 
> 2019-05-06 18:51:05,797 INFO namenode.NameNode: STARTUP_MSG:  
> /************************************************************ 
> STARTUP_MSG: Starting NameNode 
> STARTUP_MSG:   host = hdfs-namenode1/10.0.0.66 
> STARTUP_MSG:   args = [-format] 
> STARTUP_MSG:   version = 3.1.2 
> STARTUP_MSG:   classpath = /opt/hadoop-latest/etc/hadoop:/opt/hadoop-latest/share/hadoop/common/lib/... [full classpath elided] 
> STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r 1019dde65bcf12e05ef48ac71e84550d589e5d9a; compiled by 'sunilg' on 2019-01-29T01:39Z 
> STARTUP_MSG:   java = 1.8.0_202 
> ************************************************************/ 
> 2019-05-06 18:51:05,806 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
> 2019-05-06 18:51:05,886 INFO namenode.NameNode: createNameNode [-format] 
> 2019-05-06 18:51:06,505 INFO security.UserGroupInformation: Login successful for user root/hdfs-namenode1@HADOOP using keytab file /etc/krb5.keytab 
> 2019-05-06 18:51:06,592 INFO common.Util: Assuming 'file' scheme for path /opt/hadoop-latest/data in configuration. 
> 2019-05-06 18:51:06,593 INFO common.Util: Assuming 'file' scheme for path /opt/hadoop-latest/data in configuration. 
> Formatting using clusterid: CID-8b0b34be-fd5f-471c-825b-4917845bb9cb 
> 2019-05-06 18:51:06,633 INFO namenode.FSEditLog: Edit logging is async:true 
> 2019-05-06 18:51:06,650 INFO namenode.FSNamesystem: KeyProvider: null 
> 2019-05-06 18:51:06,652 INFO namenode.FSNamesystem: fsLock is fair: true 
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false 
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: fsOwner             = root/hdfs-namenode1@HADOOP (auth:KERBEROS) 
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: supergroup          = supergroup 
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: isPermissionEnabled = true 
> 2019-05-06 18:51:06,655 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster remains unresolved for ID secondary. Check your hdfs-site.xml file to ensure namenodes are configured properly. 
> 2019-05-06 18:51:06,655 INFO namenode.FSNamesystem: Determined nameservice ID: hadoop-hdfs-cluster 
> 2019-05-06 18:51:06,655 INFO namenode.FSNamesystem: HA Enabled: true 
> 2019-05-06 18:51:06,712 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling 
> 2019-05-06 18:51:06,725 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000 
> 2019-05-06 18:51:06,725 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=false 
> 2019-05-06 18:51:06,729 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 
> 2019-05-06 18:51:06,729 INFO blockmanagement.BlockManager: The block deletion will start around 2019 May 06 18:51:06 
> 2019-05-06 18:51:06,731 INFO util.GSet: Computing capacity for map BlocksMap 
> 2019-05-06 18:51:06,731 INFO util.GSet: VM type       = 64-bit 
> 2019-05-06 18:51:06,732 INFO util.GSet: 2.0% max memory 3.5 GB = 70.8 MB 
> 2019-05-06 18:51:06,732 INFO util.GSet: capacity      = 2^23 = 8388608 entries 
> 2019-05-06 18:51:06,753 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = true 
> 2019-05-06 18:51:06,753 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null 
> 2019-05-06 18:51:06,754 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster remains unresolved for ID secondary. Check your hdfs-site.xml file to ensure namenodes are configured properly. 
> 2019-05-06 18:51:06,767 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS 
> 2019-05-06 18:51:06,767 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0 
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000 
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager: defaultReplication         = 3 
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager: maxReplication             = 512 
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager: minReplication             = 1 
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2 
> 2019-05-06 18:51:06,772 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms 
> 2019-05-06 18:51:06,772 INFO blockmanagement.BlockManager: encryptDataTransfer        = false 
> 2019-05-06 18:51:06,772 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000 
> 2019-05-06 18:51:06,796 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215 
> 2019-05-06 18:51:06,815 INFO util.GSet: Computing capacity for map INodeMap 
> 2019-05-06 18:51:06,815 INFO util.GSet: VM type       = 64-bit 
> 2019-05-06 18:51:06,815 INFO util.GSet: 1.0% max memory 3.5 GB = 35.4 MB 
> 2019-05-06 18:51:06,815 INFO util.GSet: capacity      = 2^22 = 4194304 entries 
> 2019-05-06 18:51:06,817 INFO namenode.FSDirectory: ACLs enabled? false 
> 2019-05-06 18:51:06,817 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true 
> 2019-05-06 18:51:06,817 INFO namenode.FSDirectory: XAttrs enabled? true 
> 2019-05-06 18:51:06,818 INFO namenode.NameNode: Caching file names occurring more than 10 times 
> 2019-05-06 18:51:06,824 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536 
> 2019-05-06 18:51:06,827 INFO snapshot.SnapshotManager: SkipList is disabled 
> 2019-05-06 18:51:06,832 INFO util.GSet: Computing capacity for map cachedBlocks 
> 2019-05-06 18:51:06,832 INFO util.GSet: VM type       = 64-bit 
> 2019-05-06 18:51:06,833 INFO util.GSet: 0.25% max memory 3.5 GB = 8.8 MB 
> 2019-05-06 18:51:06,833 INFO util.GSet: capacity      = 2^20 = 1048576 entries 
> 2019-05-06 18:51:06,892 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 
> 2019-05-06 18:51:06,892 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 
> 2019-05-06 18:51:06,892 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 
> 2019-05-06 18:51:06,896 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 
> 2019-05-06 18:51:06,897 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 
> 2019-05-06 18:51:06,899 INFO util.GSet: Computing capacity for map NameNodeRetryCache 
> 2019-05-06 18:51:06,899 INFO util.GSet: VM type       = 64-bit 
> 2019-05-06 18:51:06,899 INFO util.GSet: 0.029999999329447746% max memory 3.5 GB = 1.1 MB 
> 2019-05-06 18:51:06,899 INFO util.GSet: capacity      = 2^17 = 131072 entries 
> 2019-05-06 18:51:06,931 WARN client.QuorumJournalManager: Quorum journal URI 'qjournal://hdfs-journalnode1:8485;hdfs-journalnode2:8485;hdfs-journalnode3:8485;hdfs-journalnode4:8485;hdfs-journalnode5:8485;hdfs-journalnode6:8485;hdfs-journalnode7:8485;hdfs-journal
> node8:8485/hadoop-hdfs-cluster' has an even number of Journal Nodes specified. This is not recommended! 
> 2019-05-06 18:51:07,380 WARN namenode.NameNode: Encountered exception during format:  
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown: 
> 10.0.0.51:8485 <http://10.0.0.51:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
> 10.0.0.59:8485 <http://10.0.0.59:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
> 10.0.0.57:8485 <http://10.0.0.57:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
>        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) 
>        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) 
>        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253) 
>        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142) 
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196) 
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155) 
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600) 
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710) 
> 2019-05-06 18:51:07,529 ERROR namenode.NameNode: Failed to start namenode. 
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown: 
> 10.0.0.51:8485 <http://10.0.0.51:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
> 10.0.0.59:8485 <http://10.0.0.59:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
> 10.0.0.57:8485 <http://10.0.0.57:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
>        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) 
>        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) 
>        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253) 
>        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142) 
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196) 
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155) 
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600) 
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710) 
> 2019-05-06 18:51:07,534 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown: 
> 10.0.0.51:8485 <http://10.0.0.51:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
> 10.0.0.59:8485 <http://10.0.0.59:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
> 10.0.0.57:8485 <http://10.0.0.57:8485/>: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by root/10.0.0.4@HADOOP 
> 2019-05-06 18:51:07,539 INFO namenode.NameNode: SHUTDOWN_MSG:  
> /************************************************************ 
> SHUTDOWN_MSG: Shutting down NameNode at hdfs-namenode1/10.0.0.66 <http://10.0.0.66/> 
> ************************************************************/
> 
> 
> This is quite peculiar, because running docker network inspect MYNETWORKNAME reveals that 10.0.0.4 is an internal gateway IP used by Docker:
> 
>             "lb-adss_default": { 
>                "Name": "MYNETWORKNAME_default-endpoint", 
>                "EndpointID": "ee95098d7b5d7349e7effd2eabf1a421e08b2298e190d9b4d244c6e1fd409183", 
>                "MacAddress": "02:42:0a:00:00:04", 
>                "IPv4Address": "10.0.0.4/24 <http://10.0.0.4/24>", 
>                "IPv6Address": "" 
>            }
> 
> I've tried adding a principal for root/10.0.0.4@HADOOP to the keytab used by my namenode, but no luck.
> 
> On Fri, May 3, 2019 at 9:11 PM Adam Jorgensen <adam.jorgensen@spandigital.com <ma...@spandigital.com>> wrote:
> Hmmmmmmmmmmm, this has gotten weird. Thinking the issue was due to reverse DNS not working inside Docker containers, I did a bunch of rejiggering so that each of my containers now gets configured with principals and keytabs that reflect its actual IP at runtime.
> 
> Unfortunately it's still failing to format the namenode. A sample error:
> 
> 10.0.0.51:8485 <http://10.0.0.51:8485/>: DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.54:0 <http://10.0.0.54:0/>. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root
> /10.0.0.52@HADOOP, expecting: root/10.0.0.54@HADOOP
> 
> In this scenario, 10.0.0.54 is the IP of the Namenode attempting to format the Journalnodes, and 10.0.0.52 is the IP of the hdfs-journalnode6 service.
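For context on why the IPs show up in the principals at all: Hadoop builds the principal it expects a server to present by substituting a _HOST placeholder in the configured principal pattern with the (typically reverse-resolved) hostname of the address being contacted. The sketch below illustrates that substitution; it is a rough model, not Hadoop's actual SecurityUtil code, and the pattern string is taken from the thread's principals.

```python
import socket

def expected_server_principal(pattern, addr):
    """Replace the _HOST placeholder in a Kerberos principal pattern with
    the canonical hostname for addr, falling back to addr itself when
    reverse DNS fails -- which is how IPs end up inside principals."""
    if "_HOST" not in pattern:
        return pattern
    try:
        host = socket.gethostbyaddr(addr)[0]
    except (socket.herror, socket.gaierror):
        host = addr
    return pattern.replace("_HOST", host.lower())

# With broken reverse DNS the substituted "hostname" is just the IP:
print(expected_server_principal("root/_HOST@HADOOP", "127.0.0.1"))
```

If the client and the server substitute _HOST from different views of DNS (one sees a hostname, the other an IP), the expected and presented principals disagree, which matches the mismatch in the error above.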
> 
> Why is the Namenode expecting the Journalnode to be configured with the Namenode's principal? That seems a bit weird to me. Or am I just misunderstanding Kerberos and Hadoop dramatically?
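For reference, the usual HDFS configuration keeps the _HOST placeholder in the Journalnode principal so each instance authenticates under its own hostname. A sketch of the relevant hdfs-site.xml properties (the property names are standard; the values here simply mirror the thread's root/...@HADOOP principals and keytab path, and are illustrative rather than recommended):

```xml
<!-- Illustrative values only; running services as root is not advised. -->
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>root/_HOST@HADOOP</value>
</property>
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/etc/krb5.keytab</value>
</property>
```

With working forward and reverse DNS, _HOST resolves to the same hostname on both the server side and the client side, so the presented and expected principals agree.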
> 
> On Thu, May 2, 2019 at 4:45 PM Arpit Agarwal <aagarwal@cloudera.com <ma...@cloudera.com>> wrote:
> You can use /etc/hosts entries as a workaround.
> 
> If this is a PoC/test environment, a less secure workaround is host-less principals, i.e. omitting the _HOST pattern. This is not usually recommended, since all service instances will use the same principal and it may be easier to impersonate a service if the keytab is compromised.
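As a sketch of the /etc/hosts workaround mentioned above (hostnames taken from the thread; the IPs would have to match whatever addresses the containers actually receive, which in Docker Swarm are not static by default):

```text
10.0.0.52   hdfs-journalnode6
10.0.0.54   hdfs-namenode1
```

Because glibc's resolver consults /etc/hosts for both forward and reverse lookups, entries like these make _HOST substitution yield hostnames instead of IPs on every node that carries the same file.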
> 
> 
>> On Apr 30, 2019, at 1:11 PM, Adam Jorgensen <adam.jorgensen@spandigital.com <ma...@spandigital.com>> wrote:
>> 
>> Ahhhhh....reverse DNS you say............oh dear
>> 
>> As per https://github.com/docker/for-linux/issues/365 <https://github.com/docker/for-linux/issues/365> it seems that Reverse DNS has been rather broken in Docker since early 2018 :-(
>> 
>> I guess I'm going to have to do some digging to see if I can find a way to fix this, since the other option is a painful dance involving registering Kerberos principals for specific IPs.
>> 
>> On Tue, Apr 30, 2019 at 9:56 PM Arpit Agarwal <aagarwal@cloudera.com <ma...@cloudera.com>> wrote:
>> Likely your reverse DNS is not configured properly. You can check it by running ‘dig -x 10.0.0.238’.
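Where dig is not installed inside the container, the same reverse-DNS check can be done with Python's standard library (this queries whatever resolver the container is configured with, /etc/hosts included):

```python
import socket

def reverse_dns(ip):
    """Return the hostname that reverse DNS reports for ip,
    or None if the lookup fails."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

# Loopback almost always has a PTR mapping via /etc/hosts:
print(reverse_dns("127.0.0.1"))
```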
>> 
>> 
>>> On Apr 30, 2019, at 12:24 PM, Adam Jorgensen <adam.jorgensen@spandigital.com <ma...@spandigital.com>> wrote:
>>> 
>>> Hi all, my first post here. I'm looking for some help with an issue I'm having attempting to format my Namenode. I'm running an HA configuration and have configured Kerberos for authentication. 
>>> Additionally, I am running Hadoop using Docker Swarm.
>>> 
>>> The issue I'm having is that when I attempt to format the Namenode, the operation fails with complaints that the QJM Journalnodes do not have a valid Kerberos principal. More specifically, it seems the operation to format the Journalnodes attempts to use a Kerberos principal of the form SERVICE/IP@REALM, whereas the principals I have configured use the hostname rather than the IP.
>>> 
>>> If you take a look at the logging output captured below you will get a better idea of what the issue is.
>>> 
>>> Has anyone run into this before? Is there a way I can tell the Namenode format to use the correct principals?
>>> 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:54,107 INFO namenode.NameNode: STARTUP_MSG:  
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | /************************************************************ 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG: Starting NameNode 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   host = hdfs-namenode1/10.0.0.60 <http://10.0.0.60/> 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   args = [-format, -nonInteractive] 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   version = 3.1.2 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   classpath = /opt/hadoop-latest/etc/hadoop:/opt/hadoop-latest/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/sha
>>> re/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-lates
>>> t/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/had
>>> oop-latest/share/hadoop/common/lib/commons-compress-1.18.jar:/opt/hadoop-latest/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-l
>>> atest/share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:
>>> /opt/hadoop-latest/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-databi
>>> nd-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/js
>>> on-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-framework-2.13.0.jar:/opt/hadoop-latest/share/hadoop/co
>>> mmon/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/
>>> kerb-server-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby
>>> -asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/stax2-api-3.1.
>>> 4.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-common-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/asm-5.0.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsr311-api-1.1.1.jar
>>> :/opt/hadoop-latest/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-pkix-1.0.1.ja
>>> r:/opt/hadoop-latest/share/hadoop/common/lib/commons-net-3.6.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-map
>>> per-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-client-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator
>>> -recipes-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-latest/sha
>>> re/hadoop/common/lib/hadoop-auth-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/h
>>> adoop-latest/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-server-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/hadoop-
>>> annotations-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-latest/share/hadoop/common
>>> /lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-latest/share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/avro-1.7.7.jar:/opt/hadoop-latest/share/hadoop/common/li
>>> b/javax.servlet-api-3.1.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-codec-1.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-io-2.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsc
>>> h-0.1.54.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2
>>> -tests.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-nfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-kms-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs:/opt/hadoop-latest/share/hadoop
>>> /hdfs/lib/commons-math3-3.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/common
>>> s-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/hdfs/li
>>> b/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-compress-1.18.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-ut
>>> il-ajax-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/htt
>>> pcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hado
>>> op/hdfs/lib/paranamer-2.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop
>>> /hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop-latest/share/hadoop/hdf
>>> s/lib/curator-framework-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jer
>>> sey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-crypto-1.0.1
>>> .jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/hadoop-
>>> latest/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/asm-5.0.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-latest/share/h
>>> adoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/h
>>> dfs/lib/kerby-pkix-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-latest/share/hadoop/h
>>> dfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/curator-client-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/hadoop-latest/share/h
>>> adoop/hdfs/lib/curator-recipes-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-lat
>>> est/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/hadoop-auth-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt
>>> /hadoop-latest/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/json
>>> -simple-1.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/hadoop-annotations-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/token-
>>> provider-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/zo
>>> okeeper-3.4.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/avro-1.7.7.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/gson-2.2.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-codec-1.11.ja
>>> r:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/hadoop-latest/share/ha
>>> doop/hdfs/lib/kerb-identity-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2.jar:/opt/hadoop-latest/
>>> share/hadoop/hdfs/hadoop-hdfs-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-3.1.2.jar:/opt/hadoop
>>> -latest/share/hadoop/hdfs/hadoop-hdfs-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-client-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop-
>>> latest/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/h
>>> adoop-mapreduce-client-uploader-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-m
>>> apreduce-client-jobclient-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce
>>> -client-nativetask-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn:/opt/hadoop-latest/share/hadoop/y
>>> arn/lib/metrics-core-3.2.4.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop-latest/share/hadoop/
>>> yarn/lib/guice-servlet-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop-latest/share/hadoop/yarn
>>> /lib/jackson-jaxrs-json-provider-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/hadoop-latest/share/hadoop
>>> /yarn/lib/jersey-guice-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/hadoop-latest/share/hadoop/yarn
>>> /lib/guice-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/d
>>> nsjava-2.1.7.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-registry-3.1.2.jar:/opt/hadoop-lat
>>> est/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-serv
>>> er-nodemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-common-3.1.2.jar:/opt/hadoop-la
>>> test/share/hadoop/yarn/hadoop-yarn-server-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.2.jar:/opt/hadoop-lates
>>> t/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server
>>> -router-3.1.2.jar 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   build = https://github.com/apache/hadoop.git <https://github.com/apache/hadoop.git> -r 1019dde65bcf12e05ef48ac71e84550d589e5d9a; compiled by 'sunilg' on 2019-01-29T01:39Z 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   java = 1.8.0_202 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | ************************************************************/ 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:54,115 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:54,192 INFO namenode.NameNode: createNameNode [-format, -nonInteractive] 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,034 INFO security.UserGroupInformation: Login successful for user root/hdfs-namenode1@HADOOP using keytab file /etc/krb5.keytab 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,110 INFO common.Util: Assuming 'file' scheme for path /opt/hadoop-latest/data in configuration. 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,111 INFO common.Util: Assuming 'file' scheme for path /opt/hadoop-latest/data in configuration. 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | Formatting using clusterid: CID-ad09dcfb-0152-4ded-9f72-2be4dfd729b8 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,141 INFO namenode.FSEditLog: Edit logging is async:true 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,152 INFO namenode.FSNamesystem: KeyProvider: null 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,153 INFO namenode.FSNamesystem: fsLock is fair: true 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,154 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,154 INFO namenode.FSNamesystem: fsOwner             = root/hdfs-namenode1@HADOOP (auth:KERBEROS) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,154 INFO namenode.FSNamesystem: supergroup          = supergroup 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,155 INFO namenode.FSNamesystem: isPermissionEnabled = true 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,156 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster remains unresolved for ID secondary. Check your hdfs-site.xml file to ensure namenodes are configured properly. 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,156 INFO namenode.FSNamesystem: Determined nameservice ID: hadoop-hdfs-cluster 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,156 INFO namenode.FSNamesystem: HA Enabled: true 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,192 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,203 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,203 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=false 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,206 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,206 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Apr 30 11:17:55 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,207 INFO util.GSet: Computing capacity for map BlocksMap 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,207 INFO util.GSet: VM type       = 64-bit 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,208 INFO util.GSet: 2.0% max memory 3.5 GB = 70.8 MB 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,208 INFO util.GSet: capacity      = 2^23 = 8388608 entries 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,226 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = true 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,226 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,226 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster remains unresolved for ID secondary. Check your hdfs-site.xml file to ensure namenodes are configured properly. 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: defaultReplication         = 3 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: maxReplication             = 512 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: minReplication             = 1 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,237 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,240 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,240 INFO blockmanagement.BlockManager: encryptDataTransfer        = false 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,240 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,257 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: Computing capacity for map INodeMap 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: VM type       = 64-bit 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: 1.0% max memory 3.5 GB = 35.4 MB 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,269 INFO util.GSet: capacity      = 2^22 = 4194304 entries 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.FSDirectory: ACLs enabled? false 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.FSDirectory: XAttrs enabled? true 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,270 INFO namenode.NameNode: Caching file names occurring more than 10 times 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,274 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,276 INFO snapshot.SnapshotManager: SkipList is disabled 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: Computing capacity for map cachedBlocks 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: VM type       = 64-bit 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: 0.25% max memory 3.5 GB = 8.8 MB 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,279 INFO util.GSet: capacity      = 2^20 = 1048576 entries 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,318 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,318 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,319 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,322 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,322 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: Computing capacity for map NameNodeRetryCache 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: VM type       = 64-bit 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: 0.029999999329447746% max memory 3.5 GB = 1.1 MB 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,323 INFO util.GSet: capacity      = 2^17 = 131072 entries 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,347 WARN client.QuorumJournalManager: Quorum journal URI 'qjournal://hdfs-journalnode1:8485;hdfs-journalnode2:8485;hdfs-journalnode3:8485;hdfs-journalnode4:8485;hdfs-journalnode5:8485;hdfs-journalnode6:8485;hdfs-journalnode7:8485;hdfs-journalnode8:8485/hadoop-hdfs-cluster' has an even number of Journal Nodes specified. This is not recommended!
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,653 WARN namenode.NameNode: Encountered exception during format:  
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 7 exceptions thrown: 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485: DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485: DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485: DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485: DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485: DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485: DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485: DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,657 ERROR namenode.NameNode: Failed to start namenode. 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 7 exceptions thrown: 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485: DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485: DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485: DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485: DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485: DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485: DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485: DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710) 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,658 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 7 exceptions thrown: 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485: DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485: DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485: DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485: DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485: DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485: DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485: DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort hdfs-namenode1/10.0.0.60:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30 11:17:55,660 INFO namenode.NameNode: SHUTDOWN_MSG:  
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | /************************************************************ 
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | SHUTDOWN_MSG: Shutting down NameNode at hdfs-namenode1/10.0.0.60
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | ************************************************************/
>>> 
>> 
> 


Re: Issue formatting Namenode in HA cluster using Kerberos

Posted by Adam Jorgensen <ad...@spandigital.com>.
Okay, I'm now trying to mimic working Reverse DNS by explicitly configuring
entries in my /etc/hosts file.

So, for example, on my namenode /etc/hosts contains:

> 10.0.0.66 hdfs-namenode1
> 10.0.0.49 hdfs-journalnode1
> 10.0.0.67 hdfs-journalnode2
> 10.0.0.47 hdfs-journalnode3
> 10.0.0.53 hdfs-journalnode4
> 10.0.0.61 hdfs-journalnode5
> 10.0.0.59 hdfs-journalnode6
> 10.0.0.51 hdfs-journalnode7
> 10.0.0.57 hdfs-journalnode8
>

The journal nodes contain the same definitions.
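
For anyone following along, my understanding of why these entries should help: the client substitutes the `_HOST` token in the configured principal pattern with the canonical hostname it gets from a reverse lookup of the peer's address, and when that lookup fails the raw IP survives into the expected principal (hence `root/10.0.0.236@HADOOP` in the errors earlier in the thread). A rough sketch of that behaviour — the function name, fallback logic, and resolver hook below are illustrative guesses, not Hadoop's actual code:

```python
import socket

def expand_principal(pattern, peer_ip, reverse_lookup=None):
    """Sketch of how a client builds the server principal it expects:
    replace _HOST in the configured pattern with the canonical hostname
    of the peer. If reverse DNS cannot map the IP back to a name, the
    bare IP leaks into the principal instead."""
    if reverse_lookup is None:
        def reverse_lookup(ip):
            try:
                return socket.gethostbyaddr(ip)[0]
            except (socket.herror, socket.gaierror):
                return ip  # no PTR record: fall back to the raw IP
    return pattern.replace("_HOST", reverse_lookup(peer_ip))

# With working reverse resolution the hostname-based principal matches:
ok = expand_principal("root/_HOST@HADOOP", "10.0.0.236",
                      reverse_lookup=lambda ip: "hdfs-journalnode5")
# With broken reverse DNS (as under Docker Swarm) the IP leaks through:
bad = expand_principal("root/_HOST@HADOOP", "10.0.0.236",
                       reverse_lookup=lambda ip: ip)
print(ok)   # root/hdfs-journalnode5@HADOOP
print(bad)  # root/10.0.0.236@HADOOP
```

Since the files backend in nsswitch.conf answers reverse queries from /etc/hosts before DNS is consulted, the entries above should make the hostname-based principals line up with the keytabs again.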

Now, when I attempt to run *namenode -format* I get the following output:

> bash-4.4# bin/hdfs namenode -format
> 2019-05-06 18:51:05,797 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = hdfs-namenode1/10.0.0.66
> STARTUP_MSG:   args = [-format]
> STARTUP_MSG:   version = 3.1.2
> STARTUP_MSG:   classpath =
> /opt/hadoop-latest/etc/hadoop:/opt/hadoop-latest/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/h
>
> adoop-latest/share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-util-9.3.24.v20180
>
> 605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-compres
>
> s-1.18.jar:/opt/hadoop-latest/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-latest/share/hadoop/common/lib/accessors-smart-1.2.
>
> jar:/opt/hadoop-latest/share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-s
>
> erver-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/opt/hadoop-latest/share/hadoop/commo
>
> n/lib/jul-to-slf4j-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/json-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/c
>
> ommon/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-framework-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest
>
> /share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-server-1.0.1.jar:/opt/hadoop-latest/share/had
>
> oop/common/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/co
>
> mmon/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/stax2-api-3.1.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/k
>
> erb-common-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/asm-5.0.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jaxb-a
>
> pi-2.2.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commo
>
> ns-net-3.6.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop
>
> /common/lib/curator-client-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-recipes-2.13.0.jar:/opt/hadoop-latest/share/hadoo
>
> p/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/hadoop-auth-3.1.2.jar:/opt/ha
>
> doop-latest/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-securit
>
> y-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-server-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/hadoop-annotations-3.1.2.jar:/opt/hadoop-latest/share/had
>
> oop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop
>
> -latest/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-latest/share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/avro-1.7.7.jar:/opt/hadoop-latest/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop-latest/s
>
> hare/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-codec-1.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-io-2.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/hadoop-latest/share/hadoop/commo
>
> n/lib/jsr305-3.0.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/common/
>
> hadoop-nfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-kms-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/opt/hadoop-late
>
> st/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/hd
>
> fs/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/shar
>
> e/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-compress-1.18.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/opt/hadoop-latest/sh
>
> are/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/hd
>
> fs/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/paranamer-2.3.jar:/opt/hadoop-latest/s
>
> hare/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-late
>
> st/share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/curator-framework-2.13.0.jar:/opt/hadoop-lat
>
> est/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/
>
> hdfs/lib/kerb-server-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb
>
> y-asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/
>
> [remaining classpath entries (Hadoop 3.1.2 hdfs, mapreduce and yarn jars under /opt/hadoop-latest/share/hadoop/) truncated]
>
> STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r
> 1019dde65bcf12e05ef48ac71e84550d589e5d9a; compiled by 'sunilg' on
> 2019-01-29T01:39Z
> STARTUP_MSG:   java = 1.8.0_202
> ************************************************************/
> 2019-05-06 18:51:05,806 INFO namenode.NameNode: registered UNIX signal
> handlers for [TERM, HUP, INT]
> 2019-05-06 18:51:05,886 INFO namenode.NameNode: createNameNode [-format]
> 2019-05-06 18:51:06,505 INFO security.UserGroupInformation: Login
> successful for user root/hdfs-namenode1@HADOOP using keytab file
> /etc/krb5.keytab
> 2019-05-06 18:51:06,592 INFO common.Util: Assuming 'file' scheme for path
> /opt/hadoop-latest/data in configuration.
> 2019-05-06 18:51:06,593 INFO common.Util: Assuming 'file' scheme for path
> /opt/hadoop-latest/data in configuration.
> Formatting using clusterid: CID-8b0b34be-fd5f-471c-825b-4917845bb9cb
> 2019-05-06 18:51:06,633 INFO namenode.FSEditLog: Edit logging is
> async:true
> 2019-05-06 18:51:06,650 INFO namenode.FSNamesystem: KeyProvider: null
> 2019-05-06 18:51:06,652 INFO namenode.FSNamesystem: fsLock is fair: true
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: Detailed lock hold
> time metrics enabled: false
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: fsOwner             =
> root/hdfs-namenode1@HADOOP (auth:KERBEROS)
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: supergroup          =
> supergroup
> 2019-05-06 18:51:06,653 INFO namenode.FSNamesystem: isPermissionEnabled =
> true
> 2019-05-06 18:51:06,655 WARN hdfs.DFSUtilClient: Namenode for
> hadoop-hdfs-cluster remains unresolved for ID secondary. Check your
> hdfs-site.xml file to ensure namenodes are configured properly.
> 2019-05-06 18:51:06,655 INFO namenode.FSNamesystem: Determined nameservice
> ID: hadoop-hdfs-cluster
> 2019-05-06 18:51:06,655 INFO namenode.FSNamesystem: HA Enabled: true
> 2019-05-06 18:51:06,712 INFO common.Util:
> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file
> IO profiling
> 2019-05-06 18:51:06,725 INFO blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
> 2019-05-06 18:51:06,725 INFO blockmanagement.DatanodeManager:
> dfs.namenode.datanode.registration.ip-hostname-check=false
> 2019-05-06 18:51:06,729 INFO blockmanagement.BlockManager:
> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2019-05-06 18:51:06,729 INFO blockmanagement.BlockManager: The block
> deletion will start around 2019 May 06 18:51:06
> 2019-05-06 18:51:06,731 INFO util.GSet: Computing capacity for map
> BlocksMap
> 2019-05-06 18:51:06,731 INFO util.GSet: VM type       = 64-bit
> 2019-05-06 18:51:06,732 INFO util.GSet: 2.0% max memory 3.5 GB = 70.8 MB
> 2019-05-06 18:51:06,732 INFO util.GSet: capacity      = 2^23 = 8388608
> entries
> 2019-05-06 18:51:06,753 INFO blockmanagement.BlockManager:
> dfs.block.access.token.enable = true
> 2019-05-06 18:51:06,753 INFO blockmanagement.BlockManager:
> dfs.block.access.key.update.interval=600 min(s),
> dfs.block.access.token.lifetime=600 min(s),
> dfs.encrypt.data.transfer.algorithm=null
> 2019-05-06 18:51:06,754 WARN hdfs.DFSUtilClient: Namenode for
> hadoop-hdfs-cluster remains unresolved for ID secondary. Check your
> hdfs-site.xml file to ensure namenodes are configured properly.
> 2019-05-06 18:51:06,767 INFO Configuration.deprecation: No unit for
> dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
> 2019-05-06 18:51:06,767 INFO blockmanagement.BlockManagerSafeMode:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManagerSafeMode:
> dfs.namenode.safemode.min.datanodes = 0
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManagerSafeMode:
> dfs.namenode.safemode.extension = 30000
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager:
> defaultReplication         = 3
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager: maxReplication
>             = 512
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager: minReplication
>             = 1
> 2019-05-06 18:51:06,768 INFO blockmanagement.BlockManager:
> maxReplicationStreams      = 2
> 2019-05-06 18:51:06,772 INFO blockmanagement.BlockManager:
> redundancyRecheckInterval  = 3000ms
> 2019-05-06 18:51:06,772 INFO blockmanagement.BlockManager:
> encryptDataTransfer        = false
> 2019-05-06 18:51:06,772 INFO blockmanagement.BlockManager:
> maxNumBlocksToLog          = 1000
> 2019-05-06 18:51:06,796 INFO namenode.FSDirectory: GLOBAL serial map:
> bits=24 maxEntries=16777215
> 2019-05-06 18:51:06,815 INFO util.GSet: Computing capacity for map
> INodeMap
> 2019-05-06 18:51:06,815 INFO util.GSet: VM type       = 64-bit
> 2019-05-06 18:51:06,815 INFO util.GSet: 1.0% max memory 3.5 GB = 35.4 MB
> 2019-05-06 18:51:06,815 INFO util.GSet: capacity      = 2^22 = 4194304
> entries
> 2019-05-06 18:51:06,817 INFO namenode.FSDirectory: ACLs enabled? false
> 2019-05-06 18:51:06,817 INFO namenode.FSDirectory: POSIX ACL inheritance
> enabled? true
> 2019-05-06 18:51:06,817 INFO namenode.FSDirectory: XAttrs enabled? true
> 2019-05-06 18:51:06,818 INFO namenode.NameNode: Caching file names
> occurring more than 10 times
> 2019-05-06 18:51:06,824 INFO snapshot.SnapshotManager: Loaded config
> captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false,
> snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
> 2019-05-06 18:51:06,827 INFO snapshot.SnapshotManager: SkipList is
> disabled
> 2019-05-06 18:51:06,832 INFO util.GSet: Computing capacity for map
> cachedBlocks
> 2019-05-06 18:51:06,832 INFO util.GSet: VM type       = 64-bit
> 2019-05-06 18:51:06,833 INFO util.GSet: 0.25% max memory 3.5 GB = 8.8 MB
> 2019-05-06 18:51:06,833 INFO util.GSet: capacity      = 2^20 = 1048576
> entries
> 2019-05-06 18:51:06,892 INFO metrics.TopMetrics: NNTop conf:
> dfs.namenode.top.window.num.buckets = 10
> 2019-05-06 18:51:06,892 INFO metrics.TopMetrics: NNTop conf:
> dfs.namenode.top.num.users = 10
> 2019-05-06 18:51:06,892 INFO metrics.TopMetrics: NNTop conf:
> dfs.namenode.top.windows.minutes = 1,5,25
> 2019-05-06 18:51:06,896 INFO namenode.FSNamesystem: Retry cache on
> namenode is enabled
> 2019-05-06 18:51:06,897 INFO namenode.FSNamesystem: Retry cache will use
> 0.03 of total heap and retry cache entry expiry time is 600000 millis
> 2019-05-06 18:51:06,899 INFO util.GSet: Computing capacity for map
> NameNodeRetryCache
> 2019-05-06 18:51:06,899 INFO util.GSet: VM type       = 64-bit
> 2019-05-06 18:51:06,899 INFO util.GSet: 0.029999999329447746% max memory
> 3.5 GB = 1.1 MB
> 2019-05-06 18:51:06,899 INFO util.GSet: capacity      = 2^17 = 131072
> entries
> 2019-05-06 18:51:06,931 WARN client.QuorumJournalManager: Quorum journal
> URI
> 'qjournal://hdfs-journalnode1:8485;hdfs-journalnode2:8485;hdfs-journalnode3:8485;hdfs-journalnode4:8485;hdfs-journalnode5:8485;hdfs-journalnode6:8485;hdfs-journalnode7:8485;hdfs-journal
> node8:8485/hadoop-hdfs-cluster' has an even number of Journal Nodes
> specified. This is not recommended!
> 2019-05-06 18:51:07,380 WARN namenode.NameNode: Encountered exception
> during format:
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
> JNs are ready for formatting. 3 exceptions thrown:
> 10.0.0.51:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
> 10.0.0.59:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
> 10.0.0.57:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
>        at
> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>
>        at
> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
>
>        at
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253)
>
>        at
> org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142)
>
>        at
> org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196)
>
>        at
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155)
>        at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600)
>
>        at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2019-05-06 18:51:07,529 ERROR namenode.NameNode: Failed to start namenode.
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
> JNs are ready for formatting. 3 exceptions thrown:
> 10.0.0.51:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
> 10.0.0.59:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
> 10.0.0.57:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
>        at
> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>
>        at
> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
>
>        at
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253)
>
>        at
> org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142)
>
>        at
> org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196)
>
>        at
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155)
>        at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600)
>
>        at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2019-05-06 18:51:07,534 INFO util.ExitUtil: Exiting with status 1:
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
> JNs are ready for formatting. 3 exceptions thrown:
> 10.0.0.51:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
> 10.0.0.59:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
> 10.0.0.57:8485: User root/hdfs-namenode1@HADOOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is
> only accessible by root/10.0.0.4@HADOOP
> 2019-05-06 18:51:07,539 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at hdfs-namenode1/10.0.0.66
> ************************************************************/
>
>
This is quite peculiar because a *docker network inspect MYNETWORKNAME* reveals
that *10.0.0.4* is the virtual IP of an internal load-balancer endpoint created by Docker:

            "lb-adss_default": {
               "Name": "MYNETWORKNAME_default-endpoint",
               "EndpointID":
"ee95098d7b5d7349e7effd2eabf1a421e08b2298e190d9b4d244c6e1fd409183",
               "MacAddress": "02:42:0a:00:00:04",
               "IPv4Address": "10.0.0.4/24",
               "IPv6Address": ""
           }
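
Presumably what is happening is _HOST expansion: Hadoop replaces the _HOST
placeholder in a principal pattern with the canonical hostname it
reverse-resolves for the peer's address, so when reverse DNS is broken the raw
IP ends up in the principal. A rough Python sketch of that behaviour (my own
illustration, not Hadoop's actual code; `resolve_principal` is a made-up name):

```python
import socket

def resolve_principal(pattern: str, address: str) -> str:
    """Sketch of Hadoop-style _HOST expansion: substitute the canonical
    hostname of `address` for the _HOST placeholder. If the reverse DNS
    lookup fails, the raw IP leaks into the principal -- producing
    exactly the root/10.0.0.4@HADOOP style mismatch seen above."""
    service, rest = pattern.split("/", 1)
    host, realm = rest.split("@", 1)
    if host == "_HOST":
        try:
            host = socket.gethostbyaddr(address)[0]  # reverse DNS lookup
        except OSError:
            host = address  # lookup failed: fall back to the bare IP
    return f"{service}/{host}@{realm}"
```

With working reverse DNS this would yield something like
root/hdfs-namenode1@HADOOP; without it, the IP-based principals from the error
messages appear instead.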

I've tried adding a principal for root/10.0.0.4@HADOOP and merging it into the
keytab used by my Namenode, but no luck.
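
For reference, the principals involved are configured with the usual _HOST
pattern in hdfs-site.xml. An illustrative fragment (the property names are the
stock Hadoop ones; the values are an assumption based on the realm in the
logs):

```xml
<!-- hdfs-site.xml (illustrative): _HOST is expanded per service instance -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>root/_HOST@HADOOP</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>root/_HOST@HADOOP</value>
</property>
```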

On Fri, May 3, 2019 at 9:11 PM Adam Jorgensen <
adam.jorgensen@spandigital.com> wrote:

> Hmmmmmmmmmmm, this has gotten weird. Suspecting the issue was caused by
> reverse DNS not working inside Docker containers, I did a bunch of
> rejiggering so that each of my containers now gets configured with
> principals and keytabs that reflect its actual IP at runtime.
>
> Unfortunately it's still failing to format the namenode. A sample error:
>
> 10.0.0.51:8485: DestHost:destPort hdfs-journalnode6:8485 ,
>> LocalHost:localPort hdfs-namenode1/10.0.0.54:0. Failed on local
>> exception: java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root
>> /10.0.0.52@HADOOP, expecting: root/10.0.0.54@HADOOP
>>
>
> In this scenario 10.0.0.54 is the IP of the Namenode which is attempting
> to format the Journalnodes.  10.0.0.52 is the IP of the hdfs-journalnode6
> service.
>
> Why is the Namenode expecting the Journalnode to be configured with the
> Namenode's principal? That seems a bit weird to me. Or am I just
> misunderstanding Kerberos and Hadoop dramatically?
>
> On Thu, May 2, 2019 at 4:45 PM Arpit Agarwal <aa...@cloudera.com>
> wrote:
>
>> You can use /etc/hosts entries as a workaround.
>>
>> If this is a PoC/test environment, a less secure workaround is host-less
>> principals. i.e. omit the _HOST pattern. Not usually recommended since
>> service instances will use the same principal and it may be easier to
>> impersonate a service if the keytab is compromised.
>>
>>
>> On Apr 30, 2019, at 1:11 PM, Adam Jorgensen <
>> adam.jorgensen@spandigital.com> wrote:
>>
>> Ahhhhh....reverse DNS you say............oh dear
>>
>> As per https://github.com/docker/for-linux/issues/365 it seems that
>> Reverse DNS has been rather broken in Docker since early 2018 :-(
>>
>> I'm going to have to do some digging to see if I can find some way to fix
>> this I guess, since the other option is a painful dance involving
>> registering Kerberos principals for the specific IPs
>>
>> On Tue, Apr 30, 2019 at 9:56 PM Arpit Agarwal <aa...@cloudera.com>
>> wrote:
>>
>>> Likely your reverse DNS is not configured properly. You can check it by
>>> running ‘dig -x 10.0.0.238’.
>>>
>>>
>>> On Apr 30, 2019, at 12:24 PM, Adam Jorgensen <
>>> adam.jorgensen@spandigital.com> wrote:
>>>
>>> Hi all, my first post here. I'm looking for some help with an issue I'm
>>> having attempting to format my Namenode. I'm running a HA configuration and
>>> have configured Kerberos for authentication.
>>> Additionally, I am running Hadoop using Docker Swarm.
>>>
>>> The issue I'm having is that when I attempt to format the Namenode the
>>> operation fails with complaints that the QJM Journalnodes do not have valid
>>> Kerberos principals. However, the issue is more specific in that it seems
>>> like the operation to format the Journalnodes attempts to use a Kerberos
>>> principal of the form SERVICE/IP@REALM whereas the principals I have
>>> configured use the hostname rather than the IP.
>>>
>>> If you take a look at the logging output captured below you will get a
>>> better idea of what the issue is.
>>>
>>> Has anyone run into this before? Is there a way I can tell the Namenode
>>> format to use the correct principals?
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:54,107 INFO namenode.NameNode: STARTUP_MSG:
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>>> /************************************************************
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG: Starting
>>> NameNode
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   host =
>>> hdfs-namenode1/10.0.0.60
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   args =
>>> [-format, -nonInteractive]
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:
>>>   version = 3.1.2
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:
>>>   classpath =
>>> /opt/hadoop-latest/etc/hadoop:/opt/hadoop-latest/share/hadoop/... [full Hadoop 3.1.2 classpath truncated]
>>>
>>> latest/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/h
>>>
>>> adoop-mapreduce-client-uploader-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-m
>>>
>>> apreduce-client-jobclient-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce
>>>
>>> -client-nativetask-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn:/opt/hadoop-latest/share/hadoop/y
>>>
>>> arn/lib/metrics-core-3.2.4.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop-latest/share/hadoop/
>>>
>>> yarn/lib/guice-servlet-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop-latest/share/hadoop/yarn
>>>
>>> /lib/jackson-jaxrs-json-provider-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/hadoop-latest/share/hadoop
>>>
>>> /yarn/lib/jersey-guice-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/hadoop-latest/share/hadoop/yarn
>>>
>>> /lib/guice-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/d
>>>
>>> nsjava-2.1.7.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-registry-3.1.2.jar:/opt/hadoop-lat
>>>
>>> est/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-serv
>>>
>>> er-nodemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-common-3.1.2.jar:/opt/hadoop-la
>>>
>>> test/share/hadoop/yarn/hadoop-yarn-server-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.2.jar:/opt/hadoop-lates
>>>
>>> t/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server
>>> -router-3.1.2.jar
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   build
>>> = https://github.com/apache/hadoop.git -r
>>> 1019dde65bcf12e05ef48ac71e84550d589e5d9a; compiled by 'sunilg' on
>>> 2019-01-29T01:39Z
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   java =
>>> 1.8.0_202
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>>> ************************************************************/
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:54,115 INFO namenode.NameNode: registered UNIX signal handlers for
>>> [TERM, HUP, INT]
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:54,192 INFO namenode.NameNode: createNameNode [-format,
>>> -nonInteractive]
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,034 INFO security.UserGroupInformation: Login successful for user
>>> root/hdfs-namenode1@HADOOP using keytab file /etc/krb5.keytab
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,110 INFO common.Util: Assuming 'file' scheme for path
>>> /opt/hadoop-latest/data in configuration.
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,111 INFO common.Util: Assuming 'file' scheme for path
>>> /opt/hadoop-latest/data in configuration.
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | Formatting using
>>> clusterid: CID-ad09dcfb-0152-4ded-9f72-2be4dfd729b8
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,141 INFO namenode.FSEditLog: Edit logging is async:true
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,152 INFO namenode.FSNamesystem: KeyProvider: null
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,153 INFO namenode.FSNamesystem: fsLock is fair: true
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,154 INFO namenode.FSNamesystem: Detailed lock hold time metrics
>>> enabled: false
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,154 INFO namenode.FSNamesystem: fsOwner             =
>>> root/hdfs-namenode1@HADOOP (auth:KERBEROS)
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,154 INFO namenode.FSNamesystem: supergroup          = supergroup
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,155 INFO namenode.FSNamesystem: isPermissionEnabled = true
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,156 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster
>>> remains unresolved for ID secondary. Check your hdfs-site.xml file to
>>> ensure namenodes are configured properly.
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,156 INFO namenode.FSNamesystem: Determined nameservice ID:
>>> hadoop-hdfs-cluster
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,156 INFO namenode.FSNamesystem: HA Enabled: true
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,192 INFO common.Util:
>>> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file
>>> IO profiling
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,203 INFO blockmanagement.DatanodeManager:
>>> dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,203 INFO blockmanagement.DatanodeManager:
>>> dfs.namenode.datanode.registration.ip-hostname-check=false
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,206 INFO blockmanagement.BlockManager:
>>> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,206 INFO blockmanagement.BlockManager: The block deletion will
>>> start around 2019 Apr 30 11:17:55
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,207 INFO util.GSet: Computing capacity for map BlocksMap
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,207 INFO util.GSet: VM type       = 64-bit
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,208 INFO util.GSet: 2.0% max memory 3.5 GB = 70.8 MB
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,208 INFO util.GSet: capacity      = 2^23 = 8388608 entries
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,226 INFO blockmanagement.BlockManager:
>>> dfs.block.access.token.enable = true
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,226 INFO blockmanagement.BlockManager:
>>> dfs.block.access.key.update.interval=600 min(s),
>>> dfs.block.access.token.lifetime=600 min(s),
>>> dfs.encrypt.data.transfer.algorithm=null
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,226 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster
>>> remains unresolved for ID secondary. Check your hdfs-site.xml file to
>>> ensure namenodes are configured properly.
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO Configuration.deprecation: No unit for
>>> dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode:
>>> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode:
>>> dfs.namenode.safemode.min.datanodes = 0
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode:
>>> dfs.namenode.safemode.extension = 30000
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO blockmanagement.BlockManager: defaultReplication
>>>         = 3
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO blockmanagement.BlockManager: maxReplication
>>>             = 512
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO blockmanagement.BlockManager: minReplication
>>>             = 1
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,237 INFO blockmanagement.BlockManager: maxReplicationStreams
>>>      = 2
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,240 INFO blockmanagement.BlockManager: redundancyRecheckInterval
>>>  = 3000ms
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,240 INFO blockmanagement.BlockManager: encryptDataTransfer
>>>        = false
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,240 INFO blockmanagement.BlockManager: maxNumBlocksToLog
>>>          = 1000
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,257 INFO namenode.FSDirectory: GLOBAL serial map: bits=24
>>> maxEntries=16777215
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,269 INFO util.GSet: Computing capacity for map INodeMap
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,269 INFO util.GSet: VM type       = 64-bit
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,269 INFO util.GSet: 1.0% max memory 3.5 GB = 35.4 MB
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,269 INFO util.GSet: capacity      = 2^22 = 4194304 entries
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,270 INFO namenode.FSDirectory: ACLs enabled? false
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,270 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,270 INFO namenode.FSDirectory: XAttrs enabled? true
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,270 INFO namenode.NameNode: Caching file names occurring more than
>>> 10 times
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,274 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles:
>>> false, skipCaptureAccessTimeOnlyChange: false,
>>> snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,276 INFO snapshot.SnapshotManager: SkipList is disabled
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,279 INFO util.GSet: Computing capacity for map cachedBlocks
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,279 INFO util.GSet: VM type       = 64-bit
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,279 INFO util.GSet: 0.25% max memory 3.5 GB = 8.8 MB
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,279 INFO util.GSet: capacity      = 2^20 = 1048576 entries
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,318 INFO metrics.TopMetrics: NNTop conf:
>>> dfs.namenode.top.window.num.buckets = 10
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,318 INFO metrics.TopMetrics: NNTop conf:
>>> dfs.namenode.top.num.users = 10
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,319 INFO metrics.TopMetrics: NNTop conf:
>>> dfs.namenode.top.windows.minutes = 1,5,25
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,322 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,322 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total
>>> heap and retry cache entry expiry time is 600000 millis
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,323 INFO util.GSet: Computing capacity for map NameNodeRetryCache
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,323 INFO util.GSet: VM type       = 64-bit
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,323 INFO util.GSet: 0.029999999329447746% max memory 3.5 GB = 1.1
>>> MB
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,323 INFO util.GSet: capacity      = 2^17 = 131072 entries
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,347 WARN client.QuorumJournalManager: Quorum journal URI '
>>> qjournal://hdfs-journalnode1:8485;hdfs-journalnode2:8485;hdfs-journalnode3:8485;hdfs-journalnode4:8485;hdfs-journalnode5:8485;hdfs-jou
>>> rnalnode6:8485;hdfs-journalnode7:8485;hdfs-journalnode8:8485/hadoop-hdfs-cluster'
>>> has an even number of Journal Nodes specified. This is not recommended!
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,653 WARN namenode.NameNode: Encountered exception during format:
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
>>> JNs are ready for formatting. 7 exceptions thrown:
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485:
>>> DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485:
>>> DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485:
>>> DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485:
>>> DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485:
>>> DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485:
>>> DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485:
>>> DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155)
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,657 ERROR namenode.NameNode: Failed to start namenode.
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
>>> JNs are ready for formatting. 7 exceptions thrown:
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485:
>>> DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485:
>>> DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485:
>>> DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485:
>>> DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485:
>>> DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485:
>>> DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485:
>>> DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155)
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600)
>>>
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,658 INFO util.ExitUtil: Exiting with status 1:
>>> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
>>> JNs are ready for formatting. 7 exceptions thrown:
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485:
>>> DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485:
>>> DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485:
>>> DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485:
>>> DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485:
>>> DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485:
>>> DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485:
>>> DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort
>>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>>> java.io.IOException: Couldn't set up IO streams:
>>> java.lang.IllegalArgumentExc
>>> eption: Server has invalid Kerberos principal:
>>> root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>>> 11:17:55,660 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>>> /************************************************************
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | SHUTDOWN_MSG:
>>> Shutting down NameNode at hdfs-namenode1/10.0.0.60
>>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>>> ************************************************************/
>>>
>>>
>>>
>>

Re: Issue formatting Namenode in HA cluster using Kerberos

Posted by Adam Jorgensen <ad...@spandigital.com>.
Hmmmmmmmmmmm, this has gotten weird. Thinking the issue was caused by
reverse DNS not working inside Docker containers, I did a bunch of
rejiggering so that each of my containers now gets configured with
principals and keytabs that reflect its actual IP at runtime.
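
Incidentally, the /etc/hosts workaround suggested earlier boils down to
giving every container a static forward and reverse mapping for its peers.
A minimal sketch of the idea (the IPs and hostnames are taken from my logs,
and the file name and helper function are purely illustrative):

```shell
# Hypothetical hosts file pinning Journalnode names to their current IPs.
# These addresses come from the log output and will change whenever
# Docker Swarm reschedules the services, which is the catch.
cat > hosts.example <<'EOF'
10.0.0.250 hdfs-journalnode1
10.0.0.232 hdfs-journalnode2
10.0.0.238 hdfs-journalnode3
EOF

# File-based stand-in for the reverse lookup that has to succeed before
# Hadoop can substitute a hostname into the _HOST principal pattern.
reverse() { awk -v ip="$1" '$1 == ip { print $2 }' hosts.example; }

reverse 10.0.0.238   # -> hdfs-journalnode3
```

In a real deployment these entries would go into /etc/hosts inside each
container (or in via Docker's --add-host), which only helps if the service
IPs are pinned.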

Unfortunately it's still failing to format the namenode. A sample error:

10.0.0.51:8485: DestHost:destPort hdfs-journalnode6:8485 ,
> LocalHost:localPort hdfs-namenode1/10.0.0.54:0. Failed on local
> exception: java.io.IOException: Couldn't set up IO streams:
> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
> root
> /10.0.0.52@HADOOP, expecting: root/10.0.0.54@HADOOP
>

In this scenario, 10.0.0.54 is the IP of the Namenode that is attempting
to format the Journalnodes, and 10.0.0.52 is the IP of the
hdfs-journalnode6 service.

Why is the Namenode expecting the Journalnode to be configured with the
Namenode's principal? That seems a bit weird to me. Or am I just
misunderstanding Kerberos and Hadoop dramatically?
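
My current understanding (possibly wrong — this mirrors, rather than
reproduces, what Hadoop does) is that the expected principal is computed on
the client side by substituting the canonicalised form of the address being
dialled into the configured _HOST pattern:

```shell
# Rough sketch of the client-side _HOST substitution. The pattern and
# address are illustrative; this is not the actual Hadoop code.
pattern='root/_HOST@HADOOP'   # e.g. dfs.journalnode.kerberos.principal
target='10.0.0.238'           # address the Namenode actually dialled

# With working reverse DNS this would canonicalise to a hostname; with
# broken reverse DNS the raw IP survives, so the client ends up
# expecting root/<ip>@REALM while the server logged in as root/<host>@REALM.
canonical="$target"

echo "${pattern/_HOST/$canonical}"   # -> root/10.0.0.238@HADOOP
```

That would explain the original root/IP@REALM errors, though it doesn't
obviously account for it now expecting the Namenode's own IP.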

On Thu, May 2, 2019 at 4:45 PM Arpit Agarwal <aa...@cloudera.com> wrote:

> You can use /etc/hosts entries as a workaround.
>
> If this is a PoC/test environment, a less secure workaround is host-less
> principals. i.e. omit the _HOST pattern. Not usually recommended since
> service instances will use the same principal and it may be easier to
> impersonate a service if the keytab is compromised.
>
>
> On Apr 30, 2019, at 1:11 PM, Adam Jorgensen <
> adam.jorgensen@spandigital.com> wrote:
>
> Ahhhhh....reverse DNS you say............oh dear
>
> As per https://github.com/docker/for-linux/issues/365 it seems that
> Reverse DNS has been rather broken in Docker since early 2018 :-(
>
> I'm going to have to do some digging to see if I can find some way to fix
> this I guess, since the other option is a painful dance involving
> registering Kerberos principals for the specific IPs
>
> On Tue, Apr 30, 2019 at 9:56 PM Arpit Agarwal <aa...@cloudera.com>
> wrote:
>
>> Likely your reverse DNS is not configured properly. You can check it by
>> running ‘dig -x 10.0.0.238’.
>>
>>
>> On Apr 30, 2019, at 12:24 PM, Adam Jorgensen <
>> adam.jorgensen@spandigital.com> wrote:
>>
>> Hi all, my first post here. I'm looking for some help with an issue I'm
>> having attempting to format my Namenode. I'm running a HA configuration and
>> have configured Kerberos for authentication.
>> Additionally, I am running Hadoop using Docker Swarm.
>>
>> The issue I'm having is that when I attempt to format the Namenode, the
>> operation fails with complaints that the QJM Journalnodes do not have valid
>> Kerberos principals. However, the issue is more specific: it seems that
>> the operation to format the Journalnodes attempts to use a Kerberos
>> principal of the form SERVICE/IP@REALM, whereas the principals I have
>> configured use the hostname rather than the IP.
>>
>> If you take a look at the logging output captured below you will get a
>> better idea of what the issue is.
>>
>> Has anyone run into this before? Is there a way I can tell the Namenode
>> format to use the correct principals?
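For reference, the Journalnode principal is configured with the _HOST token, which Hadoop expands using the canonical hostname; this expansion is why reverse DNS matters here. A sketch of the relevant hdfs-site.xml setting (exact service user and realm depend on your setup):

```xml
<!-- hdfs-site.xml (sketch): _HOST is expanded to the canonical
     hostname of the node, derived via forward/reverse DNS -->
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>root/_HOST@HADOOP</value>
</property>
```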
>>
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:54,107 INFO namenode.NameNode: STARTUP_MSG:
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>> /************************************************************
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG: Starting
>> NameNode
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   host =
>> hdfs-namenode1/10.0.0.60
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   args =
>> [-format, -nonInteractive]
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   version
>> = 3.1.2
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:
>>   classpath =
>> /opt/hadoop-latest/etc/hadoop:/opt/hadoop-latest/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/sha
>>
>> re/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-lates
>>
>> t/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/had
>>
>> oop-latest/share/hadoop/common/lib/commons-compress-1.18.jar:/opt/hadoop-latest/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-l
>>
>> atest/share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:
>>
>> /opt/hadoop-latest/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-databi
>>
>> nd-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/js
>>
>> on-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-framework-2.13.0.jar:/opt/hadoop-latest/share/hadoop/co
>>
>> mmon/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/
>>
>> kerb-server-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby
>>
>> -asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/stax2-api-3.1.
>>
>> 4.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-common-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/asm-5.0.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsr311-api-1.1.1.jar
>>
>> :/opt/hadoop-latest/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerby-pkix-1.0.1.ja
>>
>> r:/opt/hadoop-latest/share/hadoop/common/lib/commons-net-3.6.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-core-2.7.8.jar:/opt/hadoop-latest/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-map
>>
>> per-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator-client-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/curator
>>
>> -recipes-2.13.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-latest/sha
>>
>> re/hadoop/common/lib/hadoop-auth-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/h
>>
>> adoop-latest/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/jersey-server-1.19.jar:/opt/hadoop-latest/share/hadoop/common/lib/hadoop-
>>
>> annotations-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-latest/share/hadoop/common
>>
>> /lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-latest/share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/hadoop-latest/share/hadoop/common/lib/avro-1.7.7.jar:/opt/hadoop-latest/share/hadoop/common/li
>>
>> b/javax.servlet-api-3.1.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-codec-1.11.jar:/opt/hadoop-latest/share/hadoop/common/lib/commons-io-2.5.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsc
>>
>> h-0.1.54.jar:/opt/hadoop-latest/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop-latest/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/hadoop-latest/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2
>>
>> -tests.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-nfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/common/hadoop-kms-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs:/opt/hadoop-latest/share/hadoop
>>
>> /hdfs/lib/commons-math3-3.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/common
>>
>> s-lang3-3.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-latest/share/hadoop/hdfs/li
>>
>> b/kerb-simplekdc-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-compress-1.18.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-ut
>>
>> il-ajax-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/htt
>>
>> pcore-4.4.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hado
>>
>> op/hdfs/lib/paranamer-2.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-latest/share/hadoop
>>
>> /hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop-latest/share/hadoop/hdf
>>
>> s/lib/curator-framework-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jer
>>
>> sey-json-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-crypto-1.0.1
>>
>> .jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/hadoop-
>>
>> latest/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/asm-5.0.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-latest/share/h
>>
>> adoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/h
>>
>> dfs/lib/kerby-pkix-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-latest/share/hadoop/h
>>
>> dfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/curator-client-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/hadoop-latest/share/h
>>
>> adoop/hdfs/lib/curator-recipes-2.13.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop-lat
>>
>> est/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/hadoop-auth-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt
>>
>> /hadoop-latest/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/json
>>
>> -simple-1.1.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/hadoop-annotations-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/token-
>>
>> provider-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/zo
>>
>> okeeper-3.4.13.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/avro-1.7.7.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/gson-2.2.4.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-codec-1.11.ja
>>
>> r:/opt/hadoop-latest/share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop-latest/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/hadoop-latest/share/ha
>>
>> doop/hdfs/lib/kerb-identity-1.0.1.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2.jar:/opt/hadoop-latest/
>>
>> share/hadoop/hdfs/hadoop-hdfs-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-3.1.2.jar:/opt/hadoop
>>
>> -latest/share/hadoop/hdfs/hadoop-hdfs-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/hdfs/hadoop-hdfs-client-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop-
>>
>> latest/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/h
>>
>> adoop-mapreduce-client-uploader-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-m
>>
>> apreduce-client-jobclient-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce
>>
>> -client-nativetask-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.2.jar:/opt/hadoop-latest/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn:/opt/hadoop-latest/share/hadoop/y
>>
>> arn/lib/metrics-core-3.2.4.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop-latest/share/hadoop/
>>
>> yarn/lib/guice-servlet-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop-latest/share/hadoop/yarn
>>
>> /lib/jackson-jaxrs-json-provider-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/hadoop-latest/share/hadoop
>>
>> /yarn/lib/jersey-guice-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/hadoop-latest/share/hadoop/yarn
>>
>> /lib/guice-4.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop-latest/share/hadoop/yarn/lib/d
>>
>> nsjava-2.1.7.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-core-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-registry-3.1.2.jar:/opt/hadoop-lat
>>
>> est/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-services-api-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-serv
>>
>> er-nodemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-common-3.1.2.jar:/opt/hadoop-la
>>
>> test/share/hadoop/yarn/hadoop-yarn-server-common-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.2.jar:/opt/hadoop-lates
>>
>> t/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-client-3.1.2.jar:/opt/hadoop-latest/share/hadoop/yarn/hadoop-yarn-server
>> -router-3.1.2.jar
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   build =
>> https://github.com/apache/hadoop.git -r
>> 1019dde65bcf12e05ef48ac71e84550d589e5d9a; compiled by 'sunilg' on
>> 2019-01-29T01:39Z
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | STARTUP_MSG:   java =
>> 1.8.0_202
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>> ************************************************************/
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:54,115 INFO namenode.NameNode: registered UNIX signal handlers for
>> [TERM, HUP, INT]
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:54,192 INFO namenode.NameNode: createNameNode [-format,
>> -nonInteractive]
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,034 INFO security.UserGroupInformation: Login successful for user
>> root/hdfs-namenode1@HADOOP using keytab file /etc/krb5.keytab
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,110 INFO common.Util: Assuming 'file' scheme for path
>> /opt/hadoop-latest/data in configuration.
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,111 INFO common.Util: Assuming 'file' scheme for path
>> /opt/hadoop-latest/data in configuration.
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | Formatting using
>> clusterid: CID-ad09dcfb-0152-4ded-9f72-2be4dfd729b8
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,141 INFO namenode.FSEditLog: Edit logging is async:true
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,152 INFO namenode.FSNamesystem: KeyProvider: null
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,153 INFO namenode.FSNamesystem: fsLock is fair: true
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,154 INFO namenode.FSNamesystem: Detailed lock hold time metrics
>> enabled: false
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,154 INFO namenode.FSNamesystem: fsOwner             =
>> root/hdfs-namenode1@HADOOP (auth:KERBEROS)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,154 INFO namenode.FSNamesystem: supergroup          = supergroup
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,155 INFO namenode.FSNamesystem: isPermissionEnabled = true
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,156 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster
>> remains unresolved for ID secondary. Check your hdfs-site.xml file to
>> ensure namenodes are configured properly.
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,156 INFO namenode.FSNamesystem: Determined nameservice ID:
>> hadoop-hdfs-cluster
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,156 INFO namenode.FSNamesystem: HA Enabled: true
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,192 INFO common.Util:
>> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file
>> IO profiling
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,203 INFO blockmanagement.DatanodeManager:
>> dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,203 INFO blockmanagement.DatanodeManager:
>> dfs.namenode.datanode.registration.ip-hostname-check=false
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,206 INFO blockmanagement.BlockManager:
>> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,206 INFO blockmanagement.BlockManager: The block deletion will
>> start around 2019 Apr 30 11:17:55
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,207 INFO util.GSet: Computing capacity for map BlocksMap
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,207 INFO util.GSet: VM type       = 64-bit
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,208 INFO util.GSet: 2.0% max memory 3.5 GB = 70.8 MB
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,208 INFO util.GSet: capacity      = 2^23 = 8388608 entries
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,226 INFO blockmanagement.BlockManager:
>> dfs.block.access.token.enable = true
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,226 INFO blockmanagement.BlockManager:
>> dfs.block.access.key.update.interval=600 min(s),
>> dfs.block.access.token.lifetime=600 min(s),
>> dfs.encrypt.data.transfer.algorithm=null
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,226 WARN hdfs.DFSUtilClient: Namenode for hadoop-hdfs-cluster
>> remains unresolved for ID secondary. Check your hdfs-site.xml file to
>> ensure namenodes are configured properly.
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO Configuration.deprecation: No unit for
>> dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode:
>> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode:
>> dfs.namenode.safemode.min.datanodes = 0
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO blockmanagement.BlockManagerSafeMode:
>> dfs.namenode.safemode.extension = 30000
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO blockmanagement.BlockManager: defaultReplication
>>         = 3
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO blockmanagement.BlockManager: maxReplication
>>             = 512
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO blockmanagement.BlockManager: minReplication
>>             = 1
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,237 INFO blockmanagement.BlockManager: maxReplicationStreams
>>      = 2
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,240 INFO blockmanagement.BlockManager: redundancyRecheckInterval
>>  = 3000ms
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,240 INFO blockmanagement.BlockManager: encryptDataTransfer
>>        = false
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,240 INFO blockmanagement.BlockManager: maxNumBlocksToLog
>>          = 1000
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,257 INFO namenode.FSDirectory: GLOBAL serial map: bits=24
>> maxEntries=16777215
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,269 INFO util.GSet: Computing capacity for map INodeMap
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,269 INFO util.GSet: VM type       = 64-bit
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,269 INFO util.GSet: 1.0% max memory 3.5 GB = 35.4 MB
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,269 INFO util.GSet: capacity      = 2^22 = 4194304 entries
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,270 INFO namenode.FSDirectory: ACLs enabled? false
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,270 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,270 INFO namenode.FSDirectory: XAttrs enabled? true
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,270 INFO namenode.NameNode: Caching file names occurring more than
>> 10 times
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,274 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles:
>> false, skipCaptureAccessTimeOnlyChange: false,
>> snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,276 INFO snapshot.SnapshotManager: SkipList is disabled
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,279 INFO util.GSet: Computing capacity for map cachedBlocks
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,279 INFO util.GSet: VM type       = 64-bit
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,279 INFO util.GSet: 0.25% max memory 3.5 GB = 8.8 MB
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,279 INFO util.GSet: capacity      = 2^20 = 1048576 entries
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,318 INFO metrics.TopMetrics: NNTop conf:
>> dfs.namenode.top.window.num.buckets = 10
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,318 INFO metrics.TopMetrics: NNTop conf:
>> dfs.namenode.top.num.users = 10
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,319 INFO metrics.TopMetrics: NNTop conf:
>> dfs.namenode.top.windows.minutes = 1,5,25
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,322 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,322 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total
>> heap and retry cache entry expiry time is 600000 millis
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,323 INFO util.GSet: Computing capacity for map NameNodeRetryCache
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,323 INFO util.GSet: VM type       = 64-bit
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,323 INFO util.GSet: 0.029999999329447746% max memory 3.5 GB = 1.1
>> MB
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,323 INFO util.GSet: capacity      = 2^17 = 131072 entries
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,347 WARN client.QuorumJournalManager: Quorum journal URI '
>> qjournal://hdfs-journalnode1:8485;hdfs-journalnode2:8485;hdfs-journalnode3:8485;hdfs-journalnode4:8485;hdfs-journalnode5:8485;hdfs-journalnode6:8485;hdfs-journalnode7:8485;hdfs-journalnode8:8485/hadoop-hdfs-cluster'
>> has an even number of Journal Nodes specified. This is not recommended!
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,653 WARN namenode.NameNode: Encountered exception during format:
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
>> JNs are ready for formatting. 7 exceptions thrown:
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485:
>> DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485:
>> DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485:
>> DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485:
>> DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485:
>> DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485:
>> DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485:
>> DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,657 ERROR namenode.NameNode: Failed to start namenode.
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
>> JNs are ready for formatting. 7 exceptions thrown:
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.236:8485:
>> DestHost:destPort hdfs-journalnode5:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode5@HADOOP, expecting: root/10.0.0.236@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.252:8485:
>> DestHost:destPort hdfs-journalnode6:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode6@HADOOP, expecting: root/10.0.0.252@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.250:8485:
>> DestHost:destPort hdfs-journalnode1:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode1@HADOOP, expecting: root/10.0.0.250@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.240:8485:
>> DestHost:destPort hdfs-journalnode8:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode8@HADOOP, expecting: root/10.0.0.240@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.232:8485:
>> DestHost:destPort hdfs-journalnode2:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode2@HADOOP, expecting: root/10.0.0.232@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.244:8485:
>> DestHost:destPort hdfs-journalnode7:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode7@HADOOP, expecting: root/10.0.0.244@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 10.0.0.238:8485:
>> DestHost:destPort hdfs-journalnode3:8485 , LocalHost:localPort
>> hdfs-namenode1/10.0.0.60:0. Failed on local exception:
>> java.io.IOException: Couldn't set up IO streams:
>> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
>> root/hdfs-journalnode3@HADOOP, expecting: root/10.0.0.238@HADOOP
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:253)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1142)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:196)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1155)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1600)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |       at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,658 INFO util.ExitUtil: Exiting with status 1:
>> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if
>> JNs are ready for formatting. 7 exceptions thrown:
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | 2019-04-30
>> 11:17:55,660 INFO namenode.NameNode: SHUTDOWN_MSG:
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>> /************************************************************
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    | SHUTDOWN_MSG: Shutting
>> down NameNode at hdfs-namenode1/10.0.0.60
>> adss_hdfs-namenode1.1.ybygx5r50v8y@Harvester    |
>> ************************************************************/
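For anyone landing on this thread with the same trace: the "expecting: root/10.0.0.232@HADOOP" lines mean the client built the server's expected principal by substituting the _HOST placeholder with the result of a reverse DNS lookup on the JournalNode's address, and got the raw IP back because no PTR record existed (the broken-reverse-DNS-in-Docker issue discussed above). A rough sketch of that substitution logic follows; this is my own illustration of the behaviour, not Hadoop's actual SecurityUtil code, with the service/realm names taken from the log:

```python
import socket

def server_principal(pattern: str, host: str) -> str:
    """Sketch of _HOST substitution: replace _HOST with the name from a
    reverse DNS lookup of the peer address; when the lookup fails, the
    raw address leaks into the principal, producing the mismatch seen
    in the log (root/10.0.0.232@HADOOP vs root/hdfs-journalnode2@HADOOP)."""
    service, _, rest = pattern.partition("/")
    instance, _, realm = rest.partition("@")
    if instance == "_HOST":
        try:
            instance = socket.gethostbyaddr(host)[0]  # reverse (PTR) lookup
        except OSError:
            instance = host  # no PTR record: fall back to the raw address
    return f"{service}/{instance}@{realm}"

# With working reverse DNS this yields root/<hostname>@HADOOP; without a
# PTR record it yields root/<ip>@HADOOP, which the server rejects.
print(server_principal("root/_HOST@HADOOP", "127.0.0.1"))
```

This is also why the /etc/hosts workaround suggested earlier helps: it gives the resolver a stable forward and reverse mapping inside each container, so the substituted instance matches the hostname used when the keytabs were created.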