Posted to user@hbase.apache.org by Jason Bourne <pr...@gmail.com> on 2011/05/23 14:10:29 UTC

Re: Bulk upload MapReduce program in HBase-0.89

Hi all, 

 

I am running Cloudera Hadoop CDH3 in a pseudo-distributed environment. I am
trying to bulk load data into HBase 0.89 using a MapReduce program. I am not
interested in the HBase command-line tools (importtsv & completebulkload). I
took the sample MapReduce program from the HBase 0.20.6 API documentation.
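
My mapper is adapted from that sample; trimmed down it looks roughly like the
following (the table name "mytable", the column family "cf", and the
tab-separated input layout are placeholders for my real ones):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class BulkImport {

  public static class InnerMap extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {

    // "mytable" and "cf" stand in for my real table and column family names
    private HTable table;

    @Override
    public void configure(JobConf job) {
      try {
        // Builds a client config; hbase-site.xml must be visible to the task JVM
        Configuration conf = HBaseConfiguration.create(job);
        table = new HTable(conf, "mytable");
      } catch (IOException e) {
        // The exception is only logged here, so if HBase cannot be reached
        // the table field stays null and map() fails with "table is null"
        e.printStackTrace();
      }
    }

    public void map(LongWritable key, Text value,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {
      if (table == null) {
        // presumably the check the trace below points at (BulkImport.java:44)
        throw new IOException("table is null");
      }
      // Input lines are tab-separated: row key, then one value
      String[] fields = value.toString().split("\t");
      Put put = new Put(Bytes.toBytes(fields[0]));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(fields[1]));
      table.put(put);
    }

    @Override
    public void close() throws IOException {
      if (table != null) {
        table.flushCommits();
        table.close();
      }
    }
  }
}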

 

When I run the MapReduce program on Hadoop, I get the following exception.

 

java.io.IOException: table is null

        at BulkImport$InnerMap.map(BulkImport.java:44)

        at BulkImport$InnerMap.map(BulkImport.java:1)

        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)

        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:383)

        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:317)

        at ..........

...

 

attempt_201102010535_0002_m_000000_0: org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying to locate root region

attempt_201102010535_0002_m_000000_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:1089)

attempt_201102010535_0002_m_000000_0:   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:668)

...

 

I guess I am missing some settings in HBase or Hadoop.
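
Do I need to point the job configuration at ZooKeeper explicitly in the
driver, something like the sketch below? The class name and the localhost
values are just my guesses for a pseudo-distributed setup.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapred.JobConf;

public class BulkImportDriverSketch {
  public static JobConf createJobConf() {
    // Merge the HBase client settings into the job configuration so the map
    // task JVMs can find ZooKeeper and locate the root region.
    JobConf job = new JobConf(HBaseConfiguration.create(), BulkImportDriverSketch.class);

    // Guesses for a pseudo-distributed box: everything runs on localhost.
    job.set("hbase.zookeeper.quorum", "localhost");
    job.set("hbase.zookeeper.property.clientPort", "2181");
    return job;
  }
}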

 

Please advise on this problem. I hit the same problem when I run the
importtsv tool.

 

 

-Praba

 

 

 


RE: Bulk upload MapReduce program in HBase-0.89

Posted by "Buttler, David" <bu...@llnl.gov>.
First, there is no reason not to use a more recent version of HBase; 0.89 was never intended for production use. Upgrade to 0.90 (either the Cloudera version or the recently released 0.90.3).

Second, you are not really giving enough information to diagnose the problem. I would suggest starting over from scratch and carefully documenting each step.

In my recent work I have had to deal with a pseudo-distributed system:
Start name node
Start data node
Start zookeeper
Start hbase master
Start regionserver

If you start them in separate windows from the command line, you can watch the errors as they are printed, which is very important.
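
Before you even create the table, a few lines of throwaway client code will
tell you whether ZooKeeper and the HBase master are reachable from a plain
client at all (the class name here is just an example; it only needs
hbase-site.xml on its classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseSmokeTest {
  public static void main(String[] args) throws Exception {
    // Reads hbase-site.xml from the classpath; the constructor fails fast
    // if ZooKeeper or the master cannot be reached.
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    System.out.println("master running: " + admin.isMasterRunning());
  }
}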

Create the HBase table (a small client-code sketch for this step follows below)
Run a local MapReduce job (since it is pseudo-distributed there is no need to start the JobTracker and TaskTracker)
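
For the table-creation step, a few lines of client code work as well as the
hbase shell; roughly (table and column family names are only examples and
should match whatever your import job writes to):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // One table with a single column family; adjust to your schema.
    HTableDescriptor desc = new HTableDescriptor("mytable");
    desc.addFamily(new HColumnDescriptor("cf"));
    if (!admin.tableExists("mytable")) {
      admin.createTable(desc);
    }
  }
}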

Now, where are the errors coming from? Which process?  What operating system?

Dave

-----Original Message-----
From: Jason Bourne [mailto:prabaster@gmail.com] 
Sent: Monday, May 23, 2011 5:10 AM
To: user@hbase.apache.org
Subject: Re: Bulk upload MapReduce program in HBase-0.89
