Posted to user@hbase.apache.org by mike anderson <sa...@gmail.com> on 2010/03/04 21:19:03 UTC

starting hbase after losing data on hadoop namenode

Yesterday my NameNode went down because of a lack of hard disk space.
I was able to get the NameNode started again by removing the edits.new
file and sacrificing some of the data. However, I believe HBase still
thinks this data exists, as is evident from entries like this in the
master's log when I start it up:

2010-03-04 15:13:41,477 INFO org.apache.hadoop.hdfs.DFSClient: Could
not obtain block blk_-4089156066204357637_60829 from any node:
java.io.IOException: No live nodes contain current block

Which tools should I use to fix these problems? Compact? Or will
HBase fix itself?
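
For anyone hitting the same symptom: the file that references a
missing block can be traced with HDFS's fsck tool (flags per the
stock Hadoop CLI), using the block id from the log line above:

  hadoop fsck / -files -blocks -locations | grep blk_-4089156066204357637

If no live DataNode location is printed for that block, the NameNode
is carrying metadata for data that no longer exists on any DataNode.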

Thanks,
Mike

Re: starting hbase after losing data on hadoop namenode

Posted by mike anderson <sa...@gmail.com>.
HDFS reported back some corrupt blocks, so I ran fsck -delete. That
brought the filesystem back to healthy, and restarting HBase now
comes up with no complaints. Should I run a compaction just to make
sure things are settled?
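
If a compaction is wanted, a major compaction can be requested per
table from the HBase shell; 'mytable' below is a placeholder for a
real table name, and the command assumes a shell that includes the
admin commands:

  hbase shell
  > major_compact 'mytable'

A major compaction rewrites each region's store files, so it doubles
as a check that everything still on disk is readable.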

Re: starting hbase after losing data on hadoop namenode

Posted by Stack <st...@duboce.net>.
If you run fsck on your hdfs, what's it say?
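
For the record, the basic invocation (stock Hadoop CLI) is just:

  hadoop fsck /

which walks the namespace and ends with an overall Status: HEALTHY or
Status: CORRUPT summary; adding -delete removes the files whose blocks
are gone.
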
St.Ack
