Posted to user@hbase.apache.org by praba karan <pr...@gmail.com> on 2011/02/17 11:15:24 UTC

Re: Preparing Hbase for the Bulk Load

Hi all,

I've been trying to load a huge amount of data into HBase using a
MapReduce program. The HBase table has 16 columns, and the row IDs are
generated from UUIDs. When I try to load, the job runs for a while and
then fails with the exception discussed in the following link.

http://web.archiveorange.com/archive/v/gMxNALiU1zbHXVoaJzOT
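
For context, the job is essentially the following (a minimal sketch of
what I am running, written against the 0.90-style client API; the table
name "Sample", the column family "f", and the tab-separated input are
stand-ins for my real schema):

import java.io.IOException;
import java.util.UUID;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SampleUploader {

  static class Uploader
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    private static final byte[] FAMILY = Bytes.toBytes("f"); // placeholder

    @Override
    protected void map(LongWritable key, Text line, Context context)
        throws IOException, InterruptedException {
      // Row key is a random UUID; one Put per input line.
      byte[] row = Bytes.toBytes(UUID.randomUUID().toString());
      Put put = new Put(row);
      String[] fields = line.toString().split("\t");
      for (int i = 0; i < fields.length; i++) {
        // One qualifier per field: c0..c15 for the 16 columns.
        put.add(FAMILY, Bytes.toBytes("c" + i), Bytes.toBytes(fields[i]));
      }
      context.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "sample-upload");
    job.setJarByClass(SampleUploader.class);
    job.setMapperClass(Uploader.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // Map-only job writing straight to the table via TableOutputFormat.
    TableMapReduceUtil.initTableReducerJob("Sample", null, job);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}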


My job processed about 45% of the map phase; after that the job failed
with "NoServerForRegionException: Connection refused".

After that, the HBase shell stopped working. I tried restarting the
cluster. When I tried to disable and drop the table, it produced the
following exception:


"ERROR: org.apache.hadoop.hbase.RegionException: Retries exhausted, it took
too long to wait for the table Sample to be disabled."


How do I recover my HBase 0.89 installation, and is there any procedure
to prepare HBase for a bulk upload? My data contains millions of rows.



Regards
Jason

Re: Preparing Hbase for the Bulk Load

Posted by praba karan <pr...@gmail.com>.
Stack,

I have attached the HBase log file.

On Mon, Feb 28, 2011 at 1:30 AM, praba karan <pr...@gmail.com> wrote:

> Stack, sorry. I was sidelined by other work.
>
> It's sixteen columns. I have a pseudo-distributed setup that I am using
> to develop a model before moving to a big cluster. Correct, I am not
> using the bulk loader; I am using a MapReduce program to do the bulk
> load, based on the code from the link below:
>
> http://wiki.apache.org/hadoop/Hbase/MapReduce
>
> I modified it for HBase 0.89 and loaded the data into HBase 0.89. After
> uploading sample data of around 3 GB to my pseudo-distributed machine,
> HBase works fine until I restart. After I restart the machine, HBase
> stops working and throws a MasterNotRunning exception.
>
> The row count is close to 24 million.
>
> The Hadoop version is 0.20.
>
>
> Regards
> Jason
>
>
>
> On Thu, Feb 17, 2011 at 10:56 PM, Stack <st...@duboce.net> wrote:
>
>> On Thu, Feb 17, 2011 at 2:15 AM, praba karan <pr...@gmail.com> wrote:
>> > Hi all,
>> >
>> > I've been trying to load a huge amount of data into HBase using a
>> > MapReduce program. The HBase table has 16 columns, and the row IDs
>> > are generated from UUIDs.
>>
>> Is that 16 columns or 16 column families?
>>
>> When you say huge, what sizes are you talking?
>>
>> What's your cluster size?
>>
>> You are not using the bulk loader?
>>
>> How many mappers do you have running on each machine?
>>
>>
>> > When I try to load, the job runs for a while and then fails with
>> > the exception discussed in the following link.
>> >
>> > http://web.archiveorange.com/archive/v/gMxNALiU1zbHXVoaJzOT
>> >
>>
>> That exception is pretty generic.
>>
>> > After that, the HBase shell stopped working. I tried restarting the
>> > cluster. When I tried to disable and drop the table, it produced the
>> > following exception:
>> >
>> >
>> > "ERROR: org.apache.hadoop.hbase.RegionException: Retries exhausted, it
>> took
>> > too long to wait for the table Sample to be disabled."
>> >
>>
>> >
>> > How do I recover my HBase 0.89 installation, and is there any
>> > procedure to prepare HBase for a bulk upload? My data contains
>> > millions of rows.
>> >
>>
>> What size are these rows?
>>
>> Please update to HBase 0.90. What version of Hadoop?
>>
>> St.Ack
>>
>
>

Re: Preparing Hbase for the Bulk Load

Posted by praba karan <pr...@gmail.com>.
Stack, sorry. I was sidelined by other work.

It's sixteen columns. I have a pseudo-distributed setup that I am using to
develop a model before moving to a big cluster. Correct, I am not using
the bulk loader; I am using a MapReduce program to do the bulk load, based
on the code from the link below:

http://wiki.apache.org/hadoop/Hbase/MapReduce

I modified it for HBase 0.89 and loaded the data into HBase 0.89. After
uploading sample data of around 3 GB to my pseudo-distributed machine,
HBase works fine until I restart. After I restart the machine, HBase stops
working and throws a MasterNotRunning exception.
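
To confirm it is the master that is down rather than just the shell, I
check availability with something like this (a small sketch against the
0.90-style client API; it assumes the ZooKeeper quorum settings come from
the hbase-site.xml on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseUpCheck {
  public static void main(String[] args) throws Exception {
    // Reads the cluster location from hbase-site.xml on the classpath.
    Configuration conf = HBaseConfiguration.create();
    try {
      HBaseAdmin.checkHBaseAvailable(conf); // throws if no master is up
      System.out.println("HBase master is running");
    } catch (MasterNotRunningException e) {
      System.out.println("Master not running: " + e.getMessage());
    }
  }
}

After the restart, this reports the master as not running, matching what
the shell shows.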

The row count is close to 24 million.

The Hadoop version is 0.20.


Regards
Jason



On Thu, Feb 17, 2011 at 10:56 PM, Stack <st...@duboce.net> wrote:

> On Thu, Feb 17, 2011 at 2:15 AM, praba karan <pr...@gmail.com> wrote:
> > Hi all,
> >
> > I've been trying to load a huge amount of data into HBase using a
> > MapReduce program. The HBase table has 16 columns, and the row IDs are
> > generated from UUIDs.
>
> Is that 16 columns or 16 column families?
>
> When you say huge, what sizes are you talking?
>
> What's your cluster size?
>
> You are not using the bulk loader?
>
> How many mappers do you have running on each machine?
>
>
> > When I try to load, the job runs for a while and then fails with the
> > exception discussed in the following link.
> >
> > http://web.archiveorange.com/archive/v/gMxNALiU1zbHXVoaJzOT
> >
>
> That exception is pretty generic.
>
> > After that, the HBase shell stopped working. I tried restarting the
> > cluster. When I tried to disable and drop the table, it produced the
> > following exception:
> >
> >
> > "ERROR: org.apache.hadoop.hbase.RegionException: Retries exhausted, it
> took
> > too long to wait for the table Sample to be disabled."
> >
>
> >
> > How do I recover my HBase 0.89 installation, and is there any
> > procedure to prepare HBase for a bulk upload? My data contains
> > millions of rows.
> >
>
> What size are these rows?
>
> Please update to HBase 0.90. What version of Hadoop?
>
> St.Ack
>

Re: Preparing Hbase for the Bulk Load

Posted by Stack <st...@duboce.net>.
On Thu, Feb 17, 2011 at 2:15 AM, praba karan <pr...@gmail.com> wrote:
> Hi all,
>
> I've been trying to load a huge amount of data into HBase using a
> MapReduce program. The HBase table has 16 columns, and the row IDs are
> generated from UUIDs.

Is that 16 columns or 16 column families?

When you say huge, what sizes are you talking?

What's your cluster size?

You are not using the bulk loader?

How many mappers do you have running on each machine?


> When I try to load, the job runs for a while and then fails with the
> exception discussed in the following link.
>
> http://web.archiveorange.com/archive/v/gMxNALiU1zbHXVoaJzOT
>

That exception is pretty generic.

> After that, the HBase shell stopped working. I tried restarting the
> cluster. When I tried to disable and drop the table, it produced the
> following exception:
>
>
> "ERROR: org.apache.hadoop.hbase.RegionException: Retries exhausted, it took
> too long to wait for the table Sample to be disabled."
>

>
> How do I recover my HBase 0.89 installation, and is there any procedure
> to prepare HBase for a bulk upload? My data contains millions of rows.
>

What size are these rows?

Please update to HBase 0.90. What version of Hadoop?
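
If you do move to 0.90, the bulk load path is roughly the below (a
sketch, untested; the table name "Sample", the paths, and the reuse of
your existing mapper are assumptions). You write HFiles with a MapReduce
job, then move them into the regions, skipping the normal write path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "bulk-load");
    job.setJarByClass(BulkLoad.class);
    // Any mapper that emits (ImmutableBytesWritable, Put) works here,
    // e.g. the Uploader from your current upload job.
    job.setMapperClass(SampleUploader.Uploader.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    HTable table = new HTable(conf, "Sample");
    // Wires in a partitioner keyed off the table's region boundaries and
    // a sorting reducer, so each reducer writes one region's HFiles.
    HFileOutputFormat.configureIncrementalLoad(job, table);

    if (job.waitForCompletion(true)) {
      // Move the generated HFiles into the regions.
      new LoadIncrementalHFiles(conf).doBulkLoad(new Path(args[1]), table);
    }
  }
}

Because the HFiles are written offline and just moved into place, the
load does not go through the regionservers at all, which is what you want
at 24M rows. To prepare the table, create it pre-split so the job has
more than one region to write to.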

St.Ack