Posted to user@hbase.apache.org by jeevi tesh <je...@gmail.com> on 2014/10/30 11:56:06 UTC

Hbase is crashing need help

Hi,

I'm using HBase 0.94.3, Hadoop 2.2.0, and JDK 1.7.71 on a single-node machine (not yet made into a cluster) running Oracle Linux.


The HBase table is nearly 2 GB in size (I checked the files under the directory configured by hbase.rootdir).
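
For example, with the default hbase.rootdir of /hbase on HDFS (this path is an assumption; adjust it to your configuration), the table files can be sized with:

# the /hbase path assumes the default hbase.rootdir; adjust to your setup
hadoop fs -du /hbase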


Now I want to count the number of rows in the above-mentioned table.

So I used the command below:

count '<tablename>', CACHE => 1000


It started giving me this error:

ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3
retries.
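
A quick sanity check (assuming ZooKeeper runs locally on the default client port 2181) is ZooKeeper's ruok command; a healthy server replies imok:

# assumes ZooKeeper on localhost, default client port 2181
echo ruok | nc localhost 2181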


After this, something very weird happens on the network: all the nodes start throwing this same error.


If a node is already started (I mean, started via the start-hbase.sh command), I won't have any of the above issues.


Note: I'm on a single-node cluster; even then, how is it affecting the entire network? Is it because ZooKeeper has awareness of the cluster?


I feel like I'm missing some configuration. Please help me resolve this.


Thanks

Re: Hbase is crashing need help

Posted by Talat Uyarer <ta...@uyarer.com>.
Hi Jeevi,

First of all, please send your email to the appropriate mailing list.

Set CACHE lower if your rows are big; the default is to fetch one row at a time, so counting all the rows of a table this way may take a long time (it is not MapReduce; see the shell example further below). Instead, you can run RowCounter, a MapReduce job that counts all the rows of a table [1]:

bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename>
[<column1> <column2>...]
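
If you stick with the shell count, a lower CACHE would look like this (100 is just an illustrative value; tune it to your row size):

count '<tablename>', CACHE => 100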

Talat

[1] http://hbase.apache.org/book/ops_mgt.html#rowcounter

-- 
Talat UYARER
Website: http://talat.uyarer.com
Twitter: http://twitter.com/talatuyarer
Linkedin: http://tr.linkedin.com/pub/talat-uyarer/10/142/304