Posted to common-user@hadoop.apache.org by Anand Murali <an...@yahoo.com> on 2015/04/28 08:01:04 UTC

Name node starting intermittently

Dear All:
I am running Hadoop 2.6 on Ubuntu 15.04 desktop in pseudo-distributed mode. Yesterday it started up and shut down normally a couple of times; this morning it does not. Please find a section of the log file below. I shall be thankful if somebody can advise.

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.7.0_75
************************************************************/
2015-04-28 11:21:48,167 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-04-28 11:21:48,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-04-28 11:21:48,574 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2015-04-28 11:21:49,193 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-04-28 11:21:49,287 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-04-28 11:21:49,291 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-04-28 11:21:49,303 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-04-28 11:21:49,305 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-04-28 11:21:49,394 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-04-28 11:21:49,397 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-04-28 11:21:49,448 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-04-28 11:21:49,448 INFO org.mortbay.log: jetty-6.1.26
2015-04-28 11:21:49,906 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-04-28 11:21:50,070 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-04-28 11:21:50,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-04-28 11:21:50,215 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-04-28 11:21:50,216 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-04-28 11:21:50,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-04-28 11:21:50,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Apr 28 11:21:50
2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = anand_vihar (auth:SIMPLE)
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-04-28 11:21:50,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2015-04-28 11:21:50,777 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-04-28 11:21:50,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-04-28 11:21:50,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
2015-04-28 11:21:50,835 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2015-04-28 11:21:50,869 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-04-28 11:21:50,969 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-04-28 11:21:50,970 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-04-28 11:21:50,970 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-04-28 11:21:50,970 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-04-28 11:21:50,973 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
************************************************************/

Regards,

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)
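
The decisive lines in the log above are the WARN at 11:21:50,834 and the InconsistentFSStateException that follows it: the storage directory /tmp/hadoop-anand_vihar/dfs/name has vanished. With no dfs.namenode.name.dir configured, Hadoop defaults it to file://${hadoop.tmp.dir}/dfs/name, and hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, so any reboot that clears /tmp deletes the namenode's metadata. A quick check, as a sketch assuming the stock Hadoop 2.6 commands are on the PATH (the output shown is what this log implies, not captured from the machine):

    # where does this namenode think its metadata lives?
    $ hdfs getconf -confKey dfs.namenode.name.dir
    file:///tmp/hadoop-anand_vihar/dfs/name
    # did the directory survive the reboot?
    $ ls -ld /tmp/hadoop-anand_vihar/dfs/name
    ls: cannot access /tmp/hadoop-anand_vihar/dfs/name: No such file or directory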

Re: Name node starting intermittently

Posted by Ravindra Kumar Naik <ra...@gmail.com>.
Hi,

Please add these lines to hdfs-site.xml, and create the directories if they
don't exist. You may choose other directory names if you want.

<property>
  <name>dfs.name.dir</name>
  <value>/opt/hadoop/hdfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/opt/hadoop/hdfs/data</value>
</property>
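
A minimal sketch of creating those directories, assuming the daemons run as the user shown on the fsOwner line of the log (anand_vihar) and that sudo is needed only to write under /opt:

    # create the name and data directories named in hdfs-site.xml above
    $ sudo mkdir -p /opt/hadoop/hdfs/name /opt/hadoop/hdfs/data
    # hand them to the user that runs the HDFS daemons
    $ sudo chown -R anand_vihar:anand_vihar /opt/hadoop/hdfs

Note that dfs.name.dir and dfs.data.dir are the old Hadoop 1.x property names; on Hadoop 2.6 they still work as deprecated aliases for dfs.namenode.name.dir and dfs.datanode.data.dir.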

and then format the namenode

    hadoop namenode -format

and then start the hadoop daemons using the script.
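
Put together, a sketch of the whole sequence on Hadoop 2.6 (assuming HADOOP_HOME points at the install and its bin directory is on the PATH; "hadoop namenode -format" still works but is deprecated in favour of "hdfs namenode -format", and formatting wipes any previous HDFS metadata, which is harmless here since the directory is already gone):

    $ hdfs namenode -format            # recreates fsimage under dfs.name.dir
    $ $HADOOP_HOME/sbin/start-dfs.sh   # starts namenode, datanode, secondarynamenode
    $ jps                              # NameNode and DataNode should be listed
    $ hdfs dfsadmin -report            # confirms the namenode is up and serving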

Regards,
Ravindra



On Tue, Apr 28, 2015 at 12:03 PM, Anand Murali <an...@yahoo.com>
wrote:

>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Tuesday, April 28, 2015 12:00 PM, Ravindra Kumar Naik <
> ravin.iitb@gmail.com> wrote:
>
>
> Hi,
>
> Could you please post your hdfs-site.xml
>
> Regards,
> Ravindra
>
> On Tue, Apr 28, 2015 at 11:53 AM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Ravindra:
>
> I am trying to use Hadoop out of the box. Please advise a remedy; I shall
> be thankful. I am a beginner with Hadoop.
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Tuesday, April 28, 2015 11:51 AM, Ravindra Kumar Naik <
> ravin.iitb@gmail.com> wrote:
>
>
> Hi,
>
> Using the /tmp/ folder for the HDFS storage directory is not a good idea:
> the /tmp/ directory is wiped out on reboot.
>
> Regards,
> Ravindra
>
>
> On Tue, Apr 28, 2015 at 11:31 AM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Dear All:
>
> I am running Hadoop 2.6 on Ubuntu 15.04 desktop in pseudo-distributed
> mode. Yesterday it started up and shut down normally a couple of times;
> this morning it does not. Please find a section of the log file below. I
> shall be thankful if somebody can advise.
>
>
> ...
> 2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage:
> Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
> 2015-04-28 11:21:50,835 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception
> loading fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
> ...
> 2015-04-28 11:21:50,970 FATAL
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> ...
> 2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 1
> 2015-04-28 11:21:50,973 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
> ************************************************************/
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>

> dfs.block.invalidate.limit=1000
> 2015-04-28 11:21:50,216 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
> dfs.namenode.datanode.registration.ip-hostname-check=true
> 2015-04-28 11:21:50,243 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2015-04-28 11:21:50,244 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block
> deletion will start around 2015 Apr 28 11:21:50
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map BlocksMap
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: 2.0% max memory
> 889 MB = 17.8 MB
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^21 = 2097152 entries
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> defaultReplication         = 1
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxReplication             = 512
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> minReplication             = 1
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxReplicationStreams      = 2
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> shouldCheckForEnoughRacks  = false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> encryptDataTransfer        = false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxNumBlocksToLog          = 1000
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             =
> anand_vihar (auth:SIMPLE)
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          =
> supergroup
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled =
> true
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> 2015-04-28 11:21:50,290 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map INodeMap
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: 1.0% max memory
> 889 MB = 8.9 MB
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^20 = 1048576 entries
> 2015-04-28 11:21:50,777 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map cachedBlocks
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: 0.25% max memory
> 889 MB = 2.2 MB
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^18 = 262144 entries
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.extension     = 30000
> 2015-04-28 11:21:50,829 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on
> namenode is enabled
> 2015-04-28 11:21:50,829 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use
> 0.03 of total heap and retry cache entry expiry time is 600000 millis
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map NameNodeRetryCache
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet:
> 0.029999999329447746% max memory 889 MB = 273.1 KB
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^15 = 32768 entries
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr:
> 16384
> 2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage:
> Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
> 2015-04-28 11:21:50,835 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception
> loading fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,869 INFO org.mortbay.log: Stopped HttpServer2$
> SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-04-28 11:21:50,969 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
> metrics system...
> 2015-04-28 11:21:50,970 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> stopped.
> 2015-04-28 11:21:50,970 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> shutdown complete.
> 2015-04-28 11:21:50,970 FATAL
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 1
> 2015-04-28 11:21:50,973 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
> ************************************************************/
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>


Re: Name node starting intermittently

Posted by Anand Murali <an...@yahoo.com>.
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Tuesday, April 28, 2015 12:00 PM, Ravindra Kumar Naik <ra...@gmail.com> wrote:

 Hi,

Could you please post your hdfs-site.xml 

Regards,
Ravindra

On Tue, Apr 28, 2015 at 11:53 AM, Anand Murali <an...@yahoo.com> wrote:

Ravindra:
I am trying to use Hadoop out of the box. Please provide a remedy to fix this. I shall be thankful. I am a beginner with Hadoop.
Thanks
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Tuesday, April 28, 2015 11:51 AM, Ravindra Kumar Naik <ra...@gmail.com> wrote:

 Hi,

Using /tmp/ folder for hdfs storage directory is not a good idea.
The /tmp/ directory is wiped out after reboot.

Regards,
Ravindra


On Tue, Apr 28, 2015 at 11:31 AM, Anand Murali <an...@yahoo.com> wrote:

Dear All:
I am running Hadoop-2.6 on Ubuntu 15.04 desktop pseudo mode. Yesterday there was normal startup and shutdown couple of times. This morning it is not so. Find below section of log file. Shall be thankful if somebody can advise.

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.7.0_75
************************************************************/
2015-04-28 11:21:48,167 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-04-28 11:21:48,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-04-28 11:21:48,574 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2015-04-28 11:21:49,193 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-04-28 11:21:49,287 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-04-28 11:21:49,291 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-04-28 11:21:49,303 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-04-28 11:21:49,305 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-04-28 11:21:49,394 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-04-28 11:21:49,397 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-04-28 11:21:49,448 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-04-28 11:21:49,448 INFO org.mortbay.log: jetty-6.1.26
2015-04-28 11:21:49,906 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-04-28 11:21:50,070 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-04-28 11:21:50,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-04-28 11:21:50,215 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-04-28 11:21:50,216 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-04-28 11:21:50,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-04-28 11:21:50,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Apr 28 11:21:50
2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = anand_vihar (auth:SIMPLE)
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-04-28 11:21:50,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2015-04-28 11:21:50,777 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-04-28 11:21:50,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-04-28 11:21:50,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
2015-04-28 11:21:50,835 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2015-04-28 11:21:50,869 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-04-28 11:21:50,969 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-04-28 11:21:50,970 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-04-28 11:21:50,970 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-04-28 11:21:50,970 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-04-28 11:21:50,973 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
************************************************************/

Regards,

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

Re: Name node starting intermittently

Posted by Ravindra Kumar Naik <ra...@gmail.com>.
Hi,

Could you please post your hdfs-site.xml?

Regards,
Ravindra
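
For context: with a stock install, hdfs-site.xml is empty, so the NameNode
falls back to ${hadoop.tmp.dir}/dfs/name, and hadoop.tmp.dir defaults to
/tmp/hadoop-${user.name}. That matches the path in the log
(/tmp/hadoop-anand_vihar/dfs/name) and explains the intermittency: /tmp is
typically cleared on reboot, so the NameNode starts fine until the machine
is restarted. A quick check from a shell (the path is taken from the log):

    ls -ld /tmp/hadoop-anand_vihar/dfs/name

Below is a minimal hdfs-site.xml sketch that pins the storage to a
persistent location instead. The /home path is only an example; any
directory that survives reboots will do:

    <?xml version="1.0"?>
    <configuration>
      <!-- Where the NameNode keeps its fsimage and edit logs.
           Must be a path that is NOT cleared on reboot. -->
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/anand_vihar/hadoop-data/dfs/name</value>
      </property>
      <!-- Where the DataNode keeps block data; same reasoning applies. -->
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/anand_vihar/hadoop-data/dfs/data</value>
      </property>
      <!-- Single-node pseudo-distributed setup, so one replica
           (the log above shows defaultReplication = 1). -->
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>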

On Tue, Apr 28, 2015 at 11:53 AM, Anand Murali <an...@yahoo.com>
wrote:

> Ravindra:
>
> I am trying to use Hadoop out of the box. Please suggest a remedy; I shall
> be thankful. I am a beginner with Hadoop.
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Tuesday, April 28, 2015 11:51 AM, Ravindra Kumar Naik <
> ravin.iitb@gmail.com> wrote:
>
>
> Hi,
>
> Using the /tmp/ folder for the HDFS storage directory is not a good idea.
> The /tmp/ directory is wiped out after a reboot.
>
> Regards,
> Ravindra
>
>
> On Tue, Apr 28, 2015 at 11:31 AM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Dear All:
>
> I am running Hadoop-2.6 on Ubuntu 15.04 desktop pseudo mode. Yesterday
> there was normal startup and shutdown couple of times. This morning it is
> not so. Find below section of log file. Shall be thankful if somebody can
> advise.
>
>
> STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git
> -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on
> 2014-11-13T21:10Z
> STARTUP_MSG:   java = 1.7.0_75
> ************************************************************/
> 2015-04-28 11:21:48,167 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal
> handlers for [TERM, HUP, INT]
> 2015-04-28 11:21:48,176 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
> 2015-04-28 11:21:48,574 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2015-04-28 11:21:48,805 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2015-04-28 11:21:48,805 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> started
> 2015-04-28 11:21:48,806 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is
> hdfs://localhost:9000
> 2015-04-28 11:21:48,806 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use
> localhost:9000 to access this namenode/service.
> 2015-04-28 11:21:49,193 INFO org.apache.hadoop.hdfs.DFSUtil: Starting
> Web-server for hdfs at: http://0.0.0.0:50070
> 2015-04-28 11:21:49,287 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2015-04-28 11:21:49,291 INFO org.apache.hadoop.http.HttpRequestLog: Http
> request log for http.requests.namenode is not defined
> 2015-04-28 11:21:49,303 INFO org.apache.hadoop.http.HttpServer2: Added
> global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2015-04-28 11:21:49,305 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context hdfs
> 2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context static
> 2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context logs
> 2015-04-28 11:21:49,394 INFO org.apache.hadoop.http.HttpServer2: Added
> filter 'org.apache.hadoop.hdfs.web.AuthFilter'
> (class=org.apache.hadoop.hdfs.web.AuthFilter)
> 2015-04-28 11:21:49,397 INFO org.apache.hadoop.http.HttpServer2:
> addJerseyResourcePackage:
> packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
> pathSpec=/webhdfs/v1/*
> 2015-04-28 11:21:49,448 INFO org.apache.hadoop.http.HttpServer2: Jetty
> bound to port 50070
> 2015-04-28 11:21:49,448 INFO org.mortbay.log: jetty-6.1.26
> 2015-04-28 11:21:49,906 INFO org.mortbay.log: Started HttpServer2$
> SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-04-28 11:21:50,021 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage
> directory (dfs.namenode.name.dir) configured. Beware of data loss due to
> lack of redundant storage directories!
> 2015-04-28 11:21:50,021 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace
> edits storage directory (dfs.namenode.edits.dir) configured. Beware of data
> loss due to lack of redundant storage directories!
> 2015-04-28 11:21:50,070 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
> 2015-04-28 11:21:50,138 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
> 2015-04-28 11:21:50,215 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 2015-04-28 11:21:50,216 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
> dfs.namenode.datanode.registration.ip-hostname-check=true
> 2015-04-28 11:21:50,243 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2015-04-28 11:21:50,244 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block
> deletion will start around 2015 Apr 28 11:21:50
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map BlocksMap
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: 2.0% max memory
> 889 MB = 17.8 MB
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^21 = 2097152 entries
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> defaultReplication         = 1
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxReplication             = 512
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> minReplication             = 1
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxReplicationStreams      = 2
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> shouldCheckForEnoughRacks  = false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> encryptDataTransfer        = false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxNumBlocksToLog          = 1000
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             =
> anand_vihar (auth:SIMPLE)
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          =
> supergroup
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled =
> true
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> 2015-04-28 11:21:50,290 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map INodeMap
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: 1.0% max memory
> 889 MB = 8.9 MB
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^20 = 1048576 entries
> 2015-04-28 11:21:50,777 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map cachedBlocks
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: 0.25% max memory
> 889 MB = 2.2 MB
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^18 = 262144 entries
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.extension     = 30000
> 2015-04-28 11:21:50,829 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on
> namenode is enabled
> 2015-04-28 11:21:50,829 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use
> 0.03 of total heap and retry cache entry expiry time is 600000 millis
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map NameNodeRetryCache
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet:
> 0.029999999329447746% max memory 889 MB = 273.1 KB
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^15 = 32768 entries
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr:
> 16384
> 2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage:
> Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
> 2015-04-28 11:21:50,835 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception
> loading fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,869 INFO org.mortbay.log: Stopped HttpServer2$
> SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-04-28 11:21:50,969 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
> metrics system...
> 2015-04-28 11:21:50,970 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> stopped.
> 2015-04-28 11:21:50,970 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> shutdown complete.
> 2015-04-28 11:21:50,970 FATAL
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 1
> 2015-04-28 11:21:50,973 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
> ************************************************************/
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>
>
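
Putting the advice above together: after pointing dfs.namenode.name.dir
(and dfs.datanode.data.dir) at a persistent location, the new directory
has to be formatted before the NameNode can start. A sketch of the steps,
assuming a fresh single-node setup where losing the current (already
empty) HDFS metadata is acceptable; the paths follow the example config
above:

    # Stop anything still running
    stop-dfs.sh

    # Create the persistent storage directories
    mkdir -p /home/anand_vihar/hadoop-data/dfs/name
    mkdir -p /home/anand_vihar/hadoop-data/dfs/data

    # Format the new NameNode storage directory.
    # WARNING: this wipes HDFS metadata; only do it on a fresh
    # or disposable cluster.
    hdfs namenode -format

    # Start HDFS again and verify the daemons stay up
    start-dfs.sh
    jps

Once the metadata lives outside /tmp it survives reboots, and the
intermittent startup failures should stop.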

>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 1
> 2015-04-28 11:21:50,973 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
> ************************************************************/
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>
>

Re: Name node starting intermittently

Posted by Anand Murali <an...@yahoo.com>.
Ravindra:
I am trying to use Hadoop out of the box, and I am a beginner with Hadoop. Could you please suggest a remedy? I shall be thankful.
Thanks
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Tuesday, April 28, 2015 11:51 AM, Ravindra Kumar Naik <ra...@gmail.com> wrote:

Hi,

Using the /tmp/ folder for the HDFS storage directory is not a good idea. The /tmp/ directory is wiped on reboot, so the NameNode metadata kept there disappears with it.

Regards,
Ravindra
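
Concretely, the fix suggested here is to point the HDFS directories at storage that survives a reboot and then re-initialise the filesystem. Below is a minimal sketch, assuming a pseudo-distributed setup like the one in the log; the path /home/anand_vihar/hdfs is a made-up example, and any directory outside /tmp/ works. In hdfs-site.xml:

    <property>
      <!-- where the namenode keeps fsimage and edits; example path, adjust to your machine -->
      <name>dfs.namenode.name.dir</name>
      <value>file:///home/anand_vihar/hdfs/name</value>
    </property>
    <property>
      <!-- where the datanode keeps block data; example path, adjust to your machine -->
      <name>dfs.datanode.data.dir</name>
      <value>file:///home/anand_vihar/hdfs/data</value>
    </property>

Since the old metadata under /tmp/ is already gone, the namenode then has to be formatted once before the daemons are started, which creates a fresh, empty filesystem:

    $ hdfs namenode -format
    $ start-dfs.sh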


On Tue, Apr 28, 2015 at 11:31 AM, Anand Murali <an...@yahoo.com> wrote:

Dear All:
I am running Hadoop-2.6 on Ubuntu 15.04 desktop pseudo mode. Yesterday there was normal startup and shutdown couple of times. This morning it is not so. Find below section of log file. Shall be thankful if somebody can advise.

2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
2015-04-28 11:21:50,835 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
2015-04-28 11:21:50,970 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1

Regards,
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

Re: Name node starting intermittently

Posted by Ravindra Kumar Naik <ra...@gmail.com>.
Hi,

Using the /tmp/ folder for the HDFS storage directory is not a good idea. The /tmp/ directory is wiped on reboot, so the NameNode metadata kept there disappears with it.

Regards,
Ravindra
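
A quick way to confirm this diagnosis, assuming the hadoop binaries are on the PATH, is to ask the running configuration where the name directory actually resolves and then check whether that directory survived the reboot:

    $ hdfs getconf -confKey hadoop.tmp.dir          # defaults to /tmp/hadoop-${user.name}
    $ hdfs getconf -confKey dfs.namenode.name.dir   # defaults to file://${hadoop.tmp.dir}/dfs/name
    $ ls -ld /tmp/hadoop-anand_vihar/dfs/name       # the directory named in the error

If the name directory sits under /tmp/ and the ls fails after a reboot, the fsimage is gone, which matches the InconsistentFSStateException in the log.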


On Tue, Apr 28, 2015 at 11:31 AM, Anand Murali <an...@yahoo.com> wrote:

> Dear All:
>
> I am running Hadoop-2.6 on Ubuntu 15.04 desktop pseudo mode. Yesterday there was normal startup and shutdown couple of times. This morning it is not so. Find below section of log file. Shall be thankful if somebody can advise.
>
>
> STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
> STARTUP_MSG:   java = 1.7.0_75
> ************************************************************/
> 2015-04-28 11:21:48,167 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2015-04-28 11:21:48,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
> 2015-04-28 11:21:48,574 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> 2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
> 2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
> 2015-04-28 11:21:49,193 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
> 2015-04-28 11:21:49,287 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2015-04-28 11:21:49,291 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
> 2015-04-28 11:21:49,303 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2015-04-28 11:21:49,305 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
> 2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> 2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
> 2015-04-28 11:21:49,394 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
> 2015-04-28 11:21:49,397 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
> 2015-04-28 11:21:49,448 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
> 2015-04-28 11:21:49,448 INFO org.mortbay.log: jetty-6.1.26
> 2015-04-28 11:21:49,906 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
> 2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
> 2015-04-28 11:21:50,070 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
> 2015-04-28 11:21:50,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
> 2015-04-28 11:21:50,215 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
> 2015-04-28 11:21:50,216 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
> 2015-04-28 11:21:50,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2015-04-28 11:21:50,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Apr 28 11:21:50
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> defaultReplication         = 1
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxReplication             = 512
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> minReplication             = 1
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxReplicationStreams      = 2
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> shouldCheckForEnoughRacks  = false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> encryptDataTransfer        = false
> 2015-04-28 11:21:50,274 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> maxNumBlocksToLog          = 1000
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             =
> anand_vihar (auth:SIMPLE)
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          =
> supergroup
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled =
> true
> 2015-04-28 11:21:50,278 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> 2015-04-28 11:21:50,290 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map INodeMap
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: 1.0% max memory
> 889 MB = 8.9 MB
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^20 = 1048576 entries
> 2015-04-28 11:21:50,777 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map cachedBlocks
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: 0.25% max memory
> 889 MB = 2.2 MB
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^18 = 262144 entries
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 2015-04-28 11:21:50,828 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.namenode.safemode.extension     = 30000
> 2015-04-28 11:21:50,829 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on
> namenode is enabled
> 2015-04-28 11:21:50,829 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use
> 0.03 of total heap and retry cache entry expiry time is 600000 millis
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map NameNodeRetryCache
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: VM type       =
> 64-bit
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet:
> 0.029999999329447746% max memory 889 MB = 273.1 KB
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: capacity      =
> 2^15 = 32768 entries
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
> 2015-04-28 11:21:50,833 INFO
> org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr:
> 16384
> 2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage:
> Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
> 2015-04-28 11:21:50,835 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception
> loading fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,869 INFO org.mortbay.log: Stopped HttpServer2$
> SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-04-28 11:21:50,969 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
> metrics system...
> 2015-04-28 11:21:50,970 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> stopped.
> 2015-04-28 11:21:50,970 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> shutdown complete.
> 2015-04-28 11:21:50,970 FATAL
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 1
> 2015-04-28 11:21:50,973 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
> ************************************************************/
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
