Posted to common-user@hadoop.apache.org by Peter Thygesen <th...@infopaq.dk> on 2007/12/17 11:52:32 UTC

Namenode gone bad! [0.15.1]

I was trying to get HBase to work when I noticed that the regionservers
started to fail. The DFS namenode had run out of disk space. I stopped all
HBase and DFS services and quickly made some space by deleting a lot of
log files. When I restarted the DFS I got the following error message.

 

How do I recover? Or can I correct the problem?

 

Help..

 

Thanks,

Peter

 

I'm running version 0.15.1

 

 

2007-12-17 11:13:48,449 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopmaster/192.168.0.129
STARTUP_MSG:   args = []
************************************************************/
2007-12-17 11:13:48,920 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: hadoopmaster/192.168.0.129:54310
2007-12-17 11:13:48,931 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2007-12-17 11:14:00,367 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
2007-12-17 11:14:00,373 ERROR org.apache.hadoop.dfs.NameNode: java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at org.apache.hadoop.io.UTF8.readFields(UTF8.java:106)
        at org.apache.hadoop.io.ArrayWritable.readFields(ArrayWritable.java:90)
        at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:544)
        at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:736)
        at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:620)
        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:222)
        at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:76)
        at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:221)
        at org.apache.hadoop.dfs.NameNode.init(NameNode.java:130)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:168)
        at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:804)
        at org.apache.hadoop.dfs.NameNode.main(NameNode.java:813)
2007-12-17 11:14:00,376 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoopmaster/192.168.0.129
************************************************************/

 


RE: Namenode gone bad! [0.15.1]

Posted by Peter Thygesen <th...@infopaq.dk>.
Found a file "edits.new" under {$WHERE_I_KEEP_HADOOP}/dfs/name/current.
After moving that file out of the way, I was able to start the namenode again.
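
For anyone who hits the same thing: the EOFException in FSEditLog.loadFSEdits suggests the edit log (here edits.new) was truncated when the disk filled up, so the namenode could not replay it. The recovery was roughly the following; the paths are only placeholders (use whatever your dfs.name.dir actually points at), and it seems wise to copy the whole name directory somewhere safe first, since setting edits.new aside presumably discards whatever edits it still held:

    # stop HDFS before touching anything under dfs.name.dir
    bin/stop-dfs.sh

    # keep a full copy of the name directory before changing it
    cp -a /path/to/dfs/name /path/to/dfs/name.bak

    # move the truncated edits.new aside rather than deleting it
    mv /path/to/dfs/name/current/edits.new /path/to/dfs/name/current/edits.new.corrupt

    # bring HDFS back up; the namenode should load fsimage and edits cleanly now
    bin/start-dfs.sh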

-Peter

-----Original Message-----
From: Peter Thygesen 
Sent: 17 December 2007 11:53
To: hadoop-user@lucene.apache.org
Subject: Namenode gone bad! [0.15.1]
