Posted to issues@hbase.apache.org by "Deepak Sharma (JIRA)" <ji...@apache.org> on 2014/05/03 23:47:15 UTC
[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working
properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Deepak Sharma updated HBASE-10933:
----------------------------------
Assignee: Y. SREENIVASULU REDDY (was: Deepak Sharma)
> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -----------------------------------------------------------------------------
>
> Key: HBASE-10933
> URL: https://issues.apache.org/jira/browse/HBASE-10933
> Project: HBase
> Issue Type: Bug
> Components: hbck
> Affects Versions: 0.94.16, 0.98.2
> Reporter: Deepak Sharma
> Assignee: Y. SREENIVASULU REDDY
> Priority: Critical
>
> If the .regioninfo file is missing from a region directory in HBase, then running hbck repair or hbck -fixHdfsOrphans
> does not resolve the problem; instead it throws a NullPointerException:
> {code}
> 2014-04-08 20:11:49,750 INFO [main] util.HBaseFsck (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs dir: hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
> at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
> at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
> at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
> at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
> at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
> at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
> at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
> at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
> at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
> at junit.framework.TestCase.runBare(TestCase.java:132)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:243)
> at junit.framework.TestSuite.run(TestSuite.java:238)
> at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
> The root cause is that, in the HBaseFsck class, inside
> {code}
> private void adoptHdfsOrphan(HbckInfo hi)
> {code}
> we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo object:
> {code}
> TableInfo tableInfo = tablesInfo.get(tableName);
> {code}
> but in private SortedMap<String, TableInfo> loadHdfsRegionInfos() we have:
> {code}
> for (HbckInfo hbi: hbckInfos) {
>   if (hbi.getHdfsHRI() == null) {
>     // was an orphan
>     continue;
>   }
> {code}
> a check that skips orphan regions, so the table of an orphan region is never added to SortedMap<String, TableInfo> tablesInfo.
> Later, when adoptHdfsOrphan() looks up that table in the map, it gets null back and throws the NullPointerException.
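The failing pattern and the obvious guard can be illustrated with a minimal standalone sketch. This is not HBase code: the class, the simplified TableInfo, and describeTable() are hypothetical stand-ins, assuming only the SortedMap lookup behavior described above.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanLookupSketch {
    // Hypothetical stand-in for HBaseFsck's per-table state.
    static class TableInfo {
        final String tableName;
        TableInfo(String tableName) { this.tableName = tableName; }
    }

    // Mirrors the failing pattern: orphan tables are never put into tablesInfo,
    // so get() returns null; dereferencing it unguarded would throw an NPE.
    static String describeTable(SortedMap<String, TableInfo> tablesInfo, String tableName) {
        TableInfo tableInfo = tablesInfo.get(tableName);
        if (tableInfo == null) {
            // The kind of guard this report suggests is missing in adoptHdfsOrphan().
            return "orphan table, no TableInfo loaded: " + tableName;
        }
        return "found: " + tableInfo.tableName;
    }

    public static void main(String[] args) {
        SortedMap<String, TableInfo> tablesInfo = new TreeMap<>();
        tablesInfo.put("t1", new TableInfo("t1"));
        System.out.println(describeTable(tablesInfo, "t1"));               // prints "found: t1"
        System.out.println(describeTable(tablesInfo, "TestHdfsOrphans1")); // prints the orphan message
    }
}
```

A real fix would likely either add such a null check in adoptHdfsOrphan() or stop skipping orphan regions when building tablesInfo in loadHdfsRegionInfos().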
--
This message was sent by Atlassian JIRA
(v6.2#6252)