Posted to user@hbase.apache.org by jeba earnest <je...@yahoo.com> on 2014/02/28 16:14:55 UTC

Inconsistency in HBase table [Region not deployed on any region server]

In an HBase cluster, all the slave nodes got restarted. When I started the HBase services,
one of the tables (test) became inconsistent.

Some HDFS blocks (HBase blocks) were missing, so the NameNode was in safe mode. I issued "safemode -leave".
Then the HBase table (test) became inconsistent.
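
For reference, this is the safe-mode step spelled out, plus the fsck check one would use to see which files still have missing blocks. The commands assume the Hadoop 2.x "hdfs" CLI; on 1.x the equivalents are "hadoop fsck" and "hadoop dfsadmin".

hdfs fsck / -list-corruptfileblocks   # list files that still have missing or corrupt blocks
hdfs dfsadmin -safemode get           # confirm whether the NameNode is in safe mode
hdfs dfsadmin -safemode leave         # force it out of safe mode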

These are the actions I performed:


1. Executed "hbase hbck" several times
2 inconsistencies were found for table "test"

2. hbase hbck -fixMeta -fixAssignments   
HBaseFsckRepair: Region still in transition, waiting for it to become assigned: {NAME = 'test,1m\x00\x03\x1B\x15,1393439284371.4c213a47bba83c47075f21fec7c6d862.', STARTKEY = '1m\x00\x03\x1B\x15', ENDKEY = '', ENCODED = 4c213a47bba83c47075f21fec7c6d862,}

3. hbase hbck -repair    
HBaseFsckRepair: Region still in transition, waiting for it to become assigned: {NAME = 'test,1m\x00\x03\x1B\x15,1393439284371.4c213a47bba83c47075f21fec7c6d862.', STARTKEY = '1m\x00\x03\x1B\x15', ENDKEY = '', ENCODED = 4c213a47bba83c47075f21fec7c6d862,}

4. Checked the DataNode logs in parallel
Log: org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock BP-1015188871-192.168.1.11-1391187113543:blk_7616957984716737802_27846 received exception java.io.EOFException   
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.12, storageID=DS-831971799-192.168.1.12-50010-1391193910800, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-7f99a9de-258c-493c-9db0-46b9e84b4c12;nsid=1286773982;c=0):Got exception while serving BP-1015188871-192.168.1.11-1391187113543:blk_7616957984716737802_27846 to /192.168.1.12:36127


5. Checked the NameNode logs (follow-up checks on the region path it reports are noted right after this step)
ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /hbase/test/4c213a47bba83c47075f21fec7c6d862/C
2014-02-28 14:13:15,738 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 10.10.242.31:42149: error: java.io.FileNotFoundException: File does not exist: /hbase/test/4c213a47bba83c47075f21fec7c6d862/C
java.io.FileNotFoundException: File does not exist: /hbase/test/4c213a47bba83c47075f21fec7c6d862/C
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1301)
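
Follow-up checks that give more detail on this region (the region id and the /hbase/test path are taken from the hbck and NameNode output above; passing a table name to hbck to scope the check is 0.94/0.96-era usage, so treat that part as an assumption for this cluster):

hbase hbck -details test                                       # per-region detail, limited to the "test" table
hdfs dfs -ls -R /hbase/test/4c213a47bba83c47075f21fec7c6d862   # does the region dir still contain the "C" family files?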

But I am able to browse and download the file from HDFS. How can I recover the data?
How can I make the "test" table consistent?
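
By "download" I mean copying the region directory out of HDFS as a crude backup before attempting any destructive repair, roughly like this (the local target path is only an illustration):

hdfs dfs -get /hbase/test/4c213a47bba83c47075f21fec7c6d862 /tmp/region-4c213a47-backup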


Regards,

Jeba

Re: Inconsistency in HBase table [Region not deployed on any region server]

Posted by Ted Yu <yu...@gmail.com>.
Which HBase / Hadoop releases are you using?

See
http://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hadoop-hdfs
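
Roughly, the approach there is to find the files with missing blocks, restore the blocks if you still can, and otherwise drop the corrupt files and then re-run hbck. A sketch (the -delete step discards whatever data was in those blocks, so it is a last resort):

hdfs fsck / -list-corruptfileblocks               # which files are missing blocks
hdfs fsck /hbase/test -files -blocks -locations   # where the affected HBase store files live
hdfs fsck / -delete                               # last resort: remove the corrupt files (data loss)
hbase hbck -repair                                # then re-run the HBase repair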

