Posted to user@hadoop.apache.org by Arulanand Dayalan <ar...@yahoo.com.INVALID> on 2016/05/25 18:11:23 UTC

Hadoop Security - NN fails to connect to JN on different servers

Hi,
I had setup Hadoop HA Cluster(QJM) with kerberos. Cluster has three journal nodes. Namenode is able to connectto journal node running on the Namenode. For other Journal nodes its not able to connect and throws below exception.Can you please provide inputs for resolving the issue.
2016-05-25 23:09:47,526 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = server1.cehdev.com/10.67.169.45
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.5.2
STARTUP_MSG:   classpath =
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on 2014-11-14T23:45Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
2016-05-25 23:09:47,550 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-05-25 23:09:47,557 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2016-05-25 23:09:48,147 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-05-25 23:09:48,307 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-05-25 23:09:48,307 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2016-05-25 23:09:48,309 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://hdcluster
2016-05-25 23:09:48,310 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use hdcluster to access this namenode/service.
2016-05-25 23:09:49,331 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/server1.cehdev.com@CEHDEV.COM using keytab file /etc/security/keytab/hdfs.service.keytab
2016-05-25 23:09:49,392 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: HTTP/server1.cehdev.com@CEHDEV.COM
2016-05-25 23:09:49,392 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://server1.cehdev.com:50070
2016-05-25 23:09:49,480 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-05-25 23:09:49,487 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2016-05-25 23:09:49,504 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-05-25 23:09:49,510 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-05-25 23:09:49,510 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-05-25 23:09:49,511 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-05-25 23:09:49,563 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-05-25 23:09:49,566 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-05-25 23:09:49,573 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to getDelegationToken
2016-05-25 23:09:49,575 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to renewDelegationToken
2016-05-25 23:09:49,577 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to cancelDelegationToken
2016-05-25 23:09:49,579 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to fsck
2016-05-25 23:09:49,581 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to imagetransfer
2016-05-25 23:09:49,593 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2016-05-25 23:09:49,593 INFO org.mortbay.log: jetty-6.1.26
2016-05-25 23:09:49,954 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/security/keytab/hdfs.service.keytab, for principal HTTP/server1.cehdev.com@CEHDEV.COM
2016-05-25 23:09:49,965 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2016-05-25 23:09:49,967 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/security/keytab/hdfs.service.keytab, for principal HTTP/server1.cehdev.com@CEHDEV.COM
2016-05-25 23:09:49,973 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2016-05-25 23:09:50,037 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@server1.cehdev.com:50070
2016-05-25 23:09:50,084 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2016-05-25 23:09:50,085 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2016-05-25 23:09:50,087 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2016-05-25 23:09:50,095 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2016-05-25 23:09:50,096 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2016-05-25 23:09:50,153 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2016-05-25 23:09:50,219 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2016-05-25 23:09:50,220 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2016-05-25 23:09:50,223 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-05-25 23:09:50,223 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2016 May 25 23:09:50
2016-05-25 23:09:50,226 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2016-05-25 23:09:50,226 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-05-25 23:09:50,228 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2016-05-25 23:09:50,228 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2016-05-25 23:09:50,235 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=true
2016-05-25 23:09:50,235 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=rc4
2016-05-25 23:09:50,245 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 2
2016-05-25 23:09:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2016-05-25 23:09:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2016-05-25 23:09:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2016-05-25 23:09:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2016-05-25 23:09:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2016-05-25 23:09:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = true
2016-05-25 23:09:50,246 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2016-05-25 23:09:50,247 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hdfs/server1.cehdev.com@CEHDEV.COM (auth:KERBEROS)
2016-05-25 23:09:50,247 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = hadoop
2016-05-25 23:09:50,247 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2016-05-25 23:09:50,248 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: hdcluster
2016-05-25 23:09:50,248 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2016-05-25 23:09:50,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2016-05-25 23:09:50,297 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2016-05-25 23:09:50,297 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-05-25 23:09:50,298 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2016-05-25 23:09:50,298 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2016-05-25 23:09:50,299 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2016-05-25 23:09:50,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-05-25 23:09:50,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2016-05-25 23:09:50,312 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2016-05-25 23:09:50,313 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2016-05-25 23:09:50,314 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2016-05-25 23:09:50,323 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2016-05-25 23:09:50,323 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2016-05-25 23:09:50,323 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2016-05-25 23:09:50,334 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /app/hadoop/dfs/name/in_use.lock acquired by nodename 23849@server1.cehdev.com
2016-05-25 23:09:51,198 WARN org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
2016-05-25 23:09:51,785 WARN org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/server2.cehdev.com@CEHDEV.COM
2016-05-25 23:09:51,786 WARN org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/server3.cehdev.com@CEHDEV.COM
2016-05-25 23:09:51,924 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [10.67.169.45:8485, 10.67.169.49:8485, 10.67.169.46:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 1 successful responses:
10.67.169.45:8485: []
2 exceptions thrown:
10.67.169.46:8485: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/server2.cehdev.com@CEHDEV.COM; Host Details : local host is: "server1.cehdev.com/10.67.169.45"; destination host is: "server2.cehdev.com":8485;
10.67.169.49:8485: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/server3.cehdev.com@CEHDEV.COM; Host Details : local host is: "server1.cehdev.com/10.67.169.45"; destination host is: "server3.cehdev.com":8485;
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:260)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1430)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1450)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:636)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:279)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
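For reference, the sketch below dumps the QJM/Kerberos settings that govern this connection. The property names are standard HDFS keys; the values shown in comments are assumptions for this cluster, and the helper class itself is hypothetical (it only needs the Hadoop client jars and hdfs-site.xml on the classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;

    // Hypothetical helper: prints the configuration keys the NameNode uses
    // when it authenticates to the JournalNodes. Commented values are
    // assumed examples, not confirmed settings from this cluster.
    public class DumpJnSecurityConf {
        public static void main(String[] args) {
            // Loads core-site.xml and hdfs-site.xml from the classpath.
            Configuration conf = new HdfsConfiguration();
            String[] keys = {
                "dfs.namenode.shared.edits.dir",       // e.g. qjournal://server1:8485;server2:8485;server3:8485/hdcluster
                "dfs.journalnode.kerberos.principal",  // typically hdfs/_HOST@CEHDEV.COM
                "dfs.journalnode.keytab.file",         // e.g. /etc/security/keytab/hdfs.service.keytab
                "dfs.journalnode.kerberos.internal.spnego.principal"
            };
            for (String key : keys) {
                System.out.println(key + " = " + conf.get(key));
            }
        }
    }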
Regards,
Arul.

Re: Hadoop Security - NN fails to connect to JN on different servers

Posted by Gagan Brahmi <ga...@gmail.com>.
One potential problem might be the hostname configuration. If you are
using the hosts file for DNS resolution, please verify the hostnames
set in the hosts file. The problem looks to be related to the invalid
principal name, which can be caused by a bad hostname-to-IP mapping.
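As a quick check, here is a minimal standalone sketch (a hypothetical helper, not part of Hadoop) that resolves each JournalNode address and prints the server principal the client would expect under the usual hdfs/_HOST@REALM pattern. The IPs and realm are taken from the log above; the principal pattern is an assumption:

    import java.net.InetAddress;

    // Hypothetical check: Hadoop's IPC client derives the expected server
    // principal by substituting _HOST with the canonical (reverse-resolved)
    // hostname of the address it connects to, so forward and reverse
    // lookups must agree with the name in the JournalNode's keytab.
    public class JnPrincipalCheck {
        public static void main(String[] args) throws Exception {
            String[] journalNodes = {"10.67.169.45", "10.67.169.46", "10.67.169.49"};
            String principalPattern = "hdfs/_HOST@CEHDEV.COM"; // assumed pattern

            for (String host : journalNodes) {
                InetAddress addr = InetAddress.getByName(host);
                // Reverse lookup; should return e.g. server2.cehdev.com,
                // matching the principal in that JournalNode's keytab.
                String canonical = addr.getCanonicalHostName();
                String expected = principalPattern.replace("_HOST", canonical.toLowerCase());
                System.out.println(host + " -> " + canonical + " -> expects " + expected);
            }
        }
    }

If the printed principal differs from what klist -kt shows in the keytab on that JournalNode, fix the hosts file or DNS entries so forward and reverse resolution agree.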


Regards,
Gagan Brahmi

On Wed, May 25, 2016 at 11:11 AM, Arulanand Dayalan
<ar...@yahoo.com.invalid> wrote:
> Hi,
>
> I have set up a Hadoop HA cluster (QJM) with Kerberos. The cluster has three
> journal nodes. The Namenode is able to connect to the journal node running
> on the Namenode host, but it is not able to connect to the other journal
> nodes and throws the exception below. Can you please provide inputs for
> resolving the issue?
>
> 2016-05-25 23:09:47,526 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = server1.cehdev.com/10.67.169.45
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 2.5.2
> STARTUP_MSG:   classpath =
> STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r
> cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on
> 2014-11-14T23:45Z
> STARTUP_MSG:   java = 1.7.0_79
> ************************************************************/
> 2016-05-25 23:09:47,550 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal
> handlers for [TERM, HUP, INT]
> 2016-05-25 23:09:47,557 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
> 2016-05-25 23:09:48,147 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
> loaded properties from hadoop-metrics2.properties
> 2016-05-25 23:09:48,307 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 2016-05-25 23:09:48,307 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> started
> 2016-05-25 23:09:48,309 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is
> hdfs://hdcluster
> 2016-05-25 23:09:48,310 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use
> hdcluster to access this namenode/service.
> 2016-05-25 23:09:49,331 INFO
> org.apache.hadoop.security.UserGroupInformation: Login successful for user
> hdfs/server1.cehdev.com@CEHDEV.COM using keytab file
> /etc/security/keytab/hdfs.service.keytab
> 2016-05-25 23:09:49,392 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web
> server as: HTTP/server1.cehdev.com@CEHDEV.COM
> 2016-05-25 23:09:49,392 INFO org.apache.hadoop.hdfs.DFSUtil: Starting
> Web-server for hdfs at: http://server1.cehdev.com:50070
> 2016-05-25 23:09:49,480 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2016-05-25 23:09:49,487 INFO org.apache.hadoop.http.HttpRequestLog: Http
> request log for http.requests.namenode is not defined
> 2016-05-25 23:09:49,504 INFO org.apache.hadoop.http.HttpServer2: Added
> global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-05-25 23:09:49,510 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context hdfs
> 2016-05-25 23:09:49,510 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context logs
> 2016-05-25 23:09:49,511 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context static
> 2016-05-25 23:09:49,563 INFO org.apache.hadoop.http.HttpServer2: Added
> filter 'org.apache.hadoop.hdfs.web.AuthFilter'
> (class=org.apache.hadoop.hdfs.web.AuthFilter)
> 2016-05-25 23:09:49,566 INFO org.apache.hadoop.http.HttpServer2:
> addJerseyResourcePackage:
> packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
> pathSpec=/webhdfs/v1/*
> 2016-05-25 23:09:49,573 INFO org.apache.hadoop.http.HttpServer2: Adding
> Kerberos (SPNEGO) filter to getDelegationToken
> 2016-05-25 23:09:49,575 INFO org.apache.hadoop.http.HttpServer2: Adding
> Kerberos (SPNEGO) filter to renewDelegationToken
> 2016-05-25 23:09:49,577 INFO org.apache.hadoop.http.HttpServer2: Adding
> Kerberos (SPNEGO) filter to cancelDelegationToken
> 2016-05-25 23:09:49,579 INFO org.apache.hadoop.http.HttpServer2: Adding
> Kerberos (SPNEGO) filter to fsck
> 2016-05-25 23:09:49,581 INFO org.apache.hadoop.http.HttpServer2: Adding
> Kerberos (SPNEGO) filter to imagetransfer
> 2016-05-25 23:09:49,593 INFO org.apache.hadoop.http.HttpServer2: Jetty bound
> to port 50070
> 2016-05-25 23:09:49,593 INFO org.mortbay.log: jetty-6.1.26
> 2016-05-25 23:09:49,954 INFO
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler:
> Login using keytab /etc/security/keytab/hdfs.service.keytab, for principal
> HTTP/server1.cehdev.com@CEHDEV.COM
> 2016-05-25 23:09:49,965 WARN
> org.apache.hadoop.security.authentication.server.AuthenticationFilter:
> 'signature.secret' configuration not set, using a random value as secret
> 2016-05-25 23:09:49,967 INFO
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler:
> Login using keytab /etc/security/keytab/hdfs.service.keytab, for principal
> HTTP/server1.cehdev.com@CEHDEV.COM
> 2016-05-25 23:09:49,973 WARN
> org.apache.hadoop.security.authentication.server.AuthenticationFilter:
> 'signature.secret' configuration not set, using a random value as secret
> 2016-05-25 23:09:50,037 INFO org.mortbay.log: Started
> HttpServer2$SelectChannelConnectorWithSafeStartup@server1.cehdev.com:50070
> 2016-05-25 23:09:50,084 WARN org.apache.hadoop.hdfs.server.common.Util: Path
> /app/hadoop/dfs/name should be specified as a URI in configuration files.
> Please update hdfs configuration.
> 2016-05-25 23:09:50,085 WARN org.apache.hadoop.hdfs.server.common.Util: Path
> /app/hadoop/dfs/name should be specified as a URI in configuration files.
> Please update hdfs configuration.
> 2016-05-25 23:09:50,087 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage
> directory (dfs.namenode.name.dir) configured. Beware of data loss due to
> lack of redundant storage directories!
> 2016-05-25 23:09:50,095 WARN org.apache.hadoop.hdfs.server.common.Util: Path
> /app/hadoop/dfs/name should be specified as a URI in configuration files.
> Please update hdfs configuration.
> 2016-05-25 23:09:50,096 WARN org.apache.hadoop.hdfs.server.common.Util: Path
> /app/hadoop/dfs/name should be specified as a URI in configuration files.
> Please update hdfs configuration.
> 2016-05-25 23:09:50,153 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
> 2016-05-25 23:09:50,219 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 2016-05-25 23:09:50,220 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
> dfs.namenode.datanode.registration.ip-hostname-check=true
> 2016-05-25 23:09:50,223 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2016-05-25 23:09:50,223 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block
> deletion will start around 2016 May 25 23:09:50
> 2016-05-25 23:09:50,226 INFO org.apache.hadoop.util.GSet: Computing capacity
> for map BlocksMap
> 2016-05-25 23:09:50,226 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2016-05-25 23:09:50,228 INFO org.apache.hadoop.util.GSet: 2.0% max memory
> 889 MB = 17.8 MB
> 2016-05-25 23:09:50,228 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^21 = 2097152 entries
> 2016-05-25 23:09:50,235 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.block.access.token.enable=true
> 2016-05-25 23:09:50,235 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.block.access.key.update.interval=600 min(s),
> dfs.block.access.token.lifetime=600 min(s),
> dfs.encrypt.data.transfer.algorithm=rc4
> 2016-05-25 23:09:50,245 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> defaultReplication         = 2
> 2016-05-25 23:09:50,246 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication
> = 512
> 2016-05-25 23:09:50,246 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication
> = 1
> 2016-05-25 23:09:50,246 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxReplicationStreams      = 2
> 2016-05-25 23:09:50,246 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> shouldCheckForEnoughRacks  = false
> 2016-05-25 23:09:50,246 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 2016-05-25 23:09:50,246 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> encryptDataTransfer        = true
> 2016-05-25 23:09:50,246 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxNumBlocksToLog          = 1000
> 2016-05-25 23:09:50,247 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             =
> hdfs/server1.cehdev.com@CEHDEV.COM (auth:KERBEROS)
> 2016-05-25 23:09:50,247 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          =
> hadoop
> 2016-05-25 23:09:50,247 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled =
> true
> 2016-05-25 23:09:50,248 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice
> ID: hdcluster
> 2016-05-25 23:09:50,248 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
> 2016-05-25 23:09:50,250 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2016-05-25 23:09:50,297 INFO org.apache.hadoop.util.GSet: Computing capacity
> for map INodeMap
> 2016-05-25 23:09:50,297 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2016-05-25 23:09:50,298 INFO org.apache.hadoop.util.GSet: 1.0% max memory
> 889 MB = 8.9 MB
> 2016-05-25 23:09:50,298 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^20 = 1048576 entries
> 2016-05-25 23:09:50,299 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring
> more than 10 times
> 2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: Computing capacity
> for map cachedBlocks
> 2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: 0.25% max memory
> 889 MB = 2.2 MB
> 2016-05-25 23:09:50,309 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^18 = 262144 entries
> 2016-05-25 23:09:50,312 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2016-05-25 23:09:50,312 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 2016-05-25 23:09:50,312 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.extension     = 30000
> 2016-05-25 23:09:50,313 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode
> is enabled
> 2016-05-25 23:09:50,314 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use
> 0.03 of total heap and retry cache entry expiry time is 600000 millis
> 2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet: Computing capacity
> for map NameNodeRetryCache
> 2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet:
> 0.029999999329447746% max memory 889 MB = 273.1 KB
> 2016-05-25 23:09:50,317 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^15 = 32768 entries
> 2016-05-25 23:09:50,323 INFO org.apache.hadoop.hdfs.server.namenode.NNConf:
> ACLs enabled? false
> 2016-05-25 23:09:50,323 INFO org.apache.hadoop.hdfs.server.namenode.NNConf:
> XAttrs enabled? true
> 2016-05-25 23:09:50,323 INFO org.apache.hadoop.hdfs.server.namenode.NNConf:
> Maximum size of an xattr: 16384
> 2016-05-25 23:09:50,334 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /app/hadoop/dfs/name/in_use.lock acquired by nodename
> 23849@server1.cehdev.com
> 2016-05-25 23:09:51,198 WARN
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory: The property
> 'ssl.client.truststore.location' has not been set, no TrustStore will be
> loaded
> 2016-05-25 23:09:51,785 WARN org.apache.hadoop.ipc.Client: Exception
> encountered while connecting to the server :
> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
> hdfs/server2.cehdev.com@CEHDEV.COM
> 2016-05-25 23:09:51,786 WARN org.apache.hadoop.ipc.Client: Exception
> encountered while connecting to the server :
> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
> hdfs/server3.cehdev.com@CEHDEV.COM
> 2016-05-25 23:09:51,924 WARN
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input
> streams from QJM to [10.67.169.45:8485, 10.67.169.49:8485,
> 10.67.169.46:8485]. Skipping.
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many
> exceptions to achieve quorum size 2/3. 1 successful responses:
> 10.67.169.45:8485: []
> 2 exceptions thrown:
> 10.67.169.46:8485: Failed on local exception: java.io.IOException:
> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
> hdfs/server2.cehdev.com@CEHDEV.COM; Host Details : local host is:
> "server1.cehdev.com/10.67.169.45"; destination host is:
> "server2.cehdev.com":8485;
> 10.67.169.49:8485: Failed on local exception: java.io.IOException:
> java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
> hdfs/server3.cehdev.com@CEHDEV.COM; Host Details : local host is:
> "server1.cehdev.com/10.67.169.45"; destination host is:
> "server3.cehdev.com":8485;
>         at
> org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
>         at
> org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
>         at
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
>         at
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
>         at
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:260)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1430)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1450)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:636)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:279)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
>
> Regards,
> Arul.
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org