Posted to user@hbase.apache.org by "Buttler, David" <bu...@llnl.gov> on 2010/11/02 16:14:58 UTC

RE: Setting the heap size

For setting the memory on the ZooKeeper node, if you are using HBase to manage ZooKeeper, I think you can simply use the heap size setting for HBase.  I don't think you will need more than 1G, depending on what else you are using it for.
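For what it's worth, the relevant knobs live in conf/hbase-env.sh. A minimal sketch (the 1000 MB value is just an example following the 1G suggestion above):

```shell
# conf/hbase-env.sh -- heap size, in MB, used by the HBase daemons,
# including the ZooKeeper instance when HBase manages it.
export HBASE_HEAPSIZE=1000

# Tell HBase to start and stop ZooKeeper itself.
export HBASE_MANAGES_ZK=true
```

If you later switch to a standalone ZooKeeper, the usual approach there is a conf/java.env file containing something like JVMFLAGS="-Xmx1000m".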

Dave

-----Original Message-----
From: Tim Robertson [mailto:timrobertson100@gmail.com] 
Sent: Friday, October 29, 2010 7:21 AM
To: user@hbase.apache.org
Subject: Re: Setting the heap size

Hi Sean,

Based on the HBase user recommendations:
http://search.gmane.org/?query=Advice+sought+for+mixed+hardware+installation&group=gmane.comp.java.hadoop.hbase.user

It's a mixed hardware configuration.  In truth, we will likely run 1
mapper on each DN to make the most of data locality.
The 3 TT nodes are hefty dual quad-cores with hyperthreading and 24G,
but the 9 RS are only single quad-core and 8G.

Cheers,
Tim



On Fri, Oct 29, 2010 at 4:11 PM, Sean Bigdatafun
<se...@gmail.com> wrote:
> Why would you only run 9 RS and leave 3 mapreduce-only nodes? I can't see
> any benefit of doing that.
>
> Sean
>
> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson <ti...@gmail.com>wrote:
>
>> Hi all,
>>
>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>> installing ZooKeeper to perform the HBase coordination and to manage a
>> few simple process locks for other tasks we run.
>>
>> Could someone please advise what kind of heap we should give to our
>> single ZK node, and also (ahem) how does one actually set this? It's
>> not immediately obvious in the docs or config.
>>
>> Thanks,
>> Tim
>>
>