Posted to user@hbase.apache.org by Felix Sprick <fs...@gmail.com> on 2011/04/21 13:54:01 UTC

Impacting the HBase load balancer

Hi all,

We are using HBase 0.90 with the Cloudera distribution for HDFS
(CDH3b3). We have a setup with 4 regionservers and dozens of clients,
each writing 1KB of data 50 times per second into HBase. What we want
to achieve is distributing the load across all of the regionservers,
so that clients write to all of them. The question is: can we
influence which regionserver the data is written to by using an
appropriate rowkey? Or what other way is there to involve the maximum
number of regionservers? Our rowkey basically consists of a
timestamp, but we were planning to salt it by adding a client ID or
some other information to it.

thanks,
Felix
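
A minimal sketch of the salting approach described above, written
against the HBase 0.90 Java client API. The table name ("events"),
the column family and qualifier ("d"/"payload"), and the bucket count
are illustrative assumptions, not details from the thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SaltedWrite {
        // One bucket per regionserver in the 4-node setup above.
        private static final int SALT_BUCKETS = 4;

        // Deterministic salt: a given client always maps to the same
        // bucket, so its rows remain scannable under a single prefix.
        static byte[] saltedKey(String clientId, long timestampMs) {
            byte bucket =
                (byte) ((clientId.hashCode() & 0x7fffffff) % SALT_BUCKETS);
            return Bytes.add(new byte[] { bucket },
                             Bytes.toBytes(timestampMs));
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "events"); // hypothetical name
            Put put = new Put(
                saltedKey("client-42", System.currentTimeMillis()));
            // ~1KB value, matching the write size described above.
            put.add(Bytes.toBytes("d"), Bytes.toBytes("payload"),
                    new byte[1024]);
            table.put(put);
            table.close();
        }
    }

Two caveats with this scheme: a time-range scan must now issue one
scan per bucket and merge the results, and the table should be
pre-split on the salt byte when it is created; otherwise all writes
still start in the single initial region until enough splits occur.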

RE: Impacting the HBase load balancer

Posted by Michael Segel <mi...@hotmail.com>.
Felix,
You're going to want to upgrade to CDH3u0 for two main reasons:

1) There was a bug in the HBase load balancer that was fixed in
0.90.2, but Todd said that they were going to backport it to 0.90.1.
2) There's another bug in the WAL that is fixed in 0.90 (CDH3B4).

0.89 is much better than 0.20.3, but 0.90.1 with the patches is
better than 0.89.

HTH

-Mike

