Posted to user@zookeeper.apache.org by Tim Robertson <ti...@gmail.com> on 2010/10/28 11:52:31 UTC

Setting the heap size

Hi all,

We are setting up a small 13-node Hadoop cluster running 1 HDFS
master, 9 region servers for HBase and 3 MapReduce nodes, and are just
installing ZooKeeper to perform the HBase coordination and to manage a
few simple process locks for other tasks we run.

Could someone please advise what kind of heap we should give to our
single ZK node and also (ahem) how does one actually set this? It's
not immediately obvious in the docs or config.

Thanks,
Tim

RE: Setting the heap size

Posted by "Buttler, David" <bu...@llnl.gov>.
For setting the memory on the ZooKeeper node, I think you can simply use the heap size for HBase, if you are using HBase to manage ZooKeeper. I don't think you will need more than 1G, depending on what else you are using it for.
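
For example, a minimal sketch of the relevant hbase-env.sh settings (the 1000 MB value is illustrative, not a recommendation; check the defaults shipped with your HBase release):

    # conf/hbase-env.sh
    # Let HBase start and stop the bundled ZooKeeper quorum itself
    export HBASE_MANAGES_ZK=true
    # Max heap in MB for the HBase-managed daemons (illustrative value)
    export HBASE_HEAPSIZE=1000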

Dave

-----Original Message-----
From: Tim Robertson [mailto:timrobertson100@gmail.com] 
Sent: Friday, October 29, 2010 7:21 AM
To: user@hbase.apache.org
Subject: Re: Setting the heap size

Hi Sean,

Based on the HBase user recommendations:
http://search.gmane.org/?query=Advice+sought+for+mixed+hardware+installation&group=gmane.comp.java.hadoop.hbase.user

It's a mixed hardware configuration.  In truth, we will likely run 1
mapper on each DN to make the most of data locality.
The 3 TT nodes are hefty dual quad-core with hyper-threading and 24G,
but the 9 RS are only single quad-core with 8G.

Cheers,
Tim



On Fri, Oct 29, 2010 at 4:11 PM, Sean Bigdatafun
<se...@gmail.com> wrote:
> Why would you only run 9 RS and leave 3 mapreduce-only nodes? I can't see
> any benefit of doing that.
>
> Sean
>
> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson <ti...@gmail.com> wrote:
>
>> Hi all,
>>
>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>> installing ZooKeeper to perform the HBase coordination and to manage a
>> few simple process locks for other tasks we run.
>>
>> Could someone please advise what kind of heap we should give to our
>> single ZK node and also (ahem) how does one actually set this? It's
>> not immediately obvious in the docs or config.
>>
>> Thanks,
>> Tim
>>
>


Re: Setting the heap size

Posted by Tim Robertson <ti...@gmail.com>.
Hi Sean,

Based on the HBase user recommendations:
http://search.gmane.org/?query=Advice+sought+for+mixed+hardware+installation&group=gmane.comp.java.hadoop.hbase.user

It's a mixed hardware configuration.  In truth, we will likely run 1
mapper on each DN to make the most of data locality.
The 3 TT nodes are hefty dual quad-core with hyper-threading and 24G,
but the 9 RS are only single quad-core with 8G.

Cheers,
Tim



On Fri, Oct 29, 2010 at 4:11 PM, Sean Bigdatafun
<se...@gmail.com> wrote:
> Why would you only run 9 RS and leave 3 mapreduce-only nodes? I can't see
> any benefit of doing that.
>
> Sean
>
> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson <ti...@gmail.com> wrote:
>
>> Hi all,
>>
>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>> installing ZooKeeper to perform the HBase coordination and to manage a
>> few simple process locks for other tasks we run.
>>
>> Could someone please advise what kind of heap we should give to our
>> single ZK node and also (ahem) how does one actually set this? It's
>> not immediately obvious in the docs or config.
>>
>> Thanks,
>> Tim
>>
>

Re: Setting the heap size

Posted by Sean Bigdatafun <se...@gmail.com>.
Why would you only run 9 RS and leave 3 mapreduce-only nodes? I can't see
any benefit of doing that.

Sean

On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson <ti...@gmail.com> wrote:

> Hi all,
>
> We are setting up a small 13-node Hadoop cluster running 1 HDFS
> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
> installing ZooKeeper to perform the HBase coordination and to manage a
> few simple process locks for other tasks we run.
>
> Could someone please advise what kind of heap we should give to our
> single ZK node and also (ahem) how does one actually set this? It's
> not immediately obvious in the docs or config.
>
> Thanks,
> Tim
>

Re: Setting the heap size

Posted by Patrick Hunt <ph...@apache.org>.
Actually, if you are going to admin your own ZK, it's probably a good
idea to review that Admin doc fully. There is some other good detail in
there (backups and cleaning the datadir, for example).
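
For example, a minimal sketch of a periodic cleanup (zkCleanup.sh is the helper shipped in the distribution's bin/ directory; retention behavior varies by release, so check the Admin doc for your version):

    # Purge old snapshots and transaction logs from the dataDir;
    # this wraps org.apache.zookeeper.server.PurgeTxnLog
    bin/zkCleanup.sh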

Regards,

Patrick

On Fri, Oct 29, 2010 at 7:22 AM, Tim Robertson
<ti...@gmail.com> wrote:
> Great - thanks Patrick!
>
>
> On Thu, Oct 28, 2010 at 6:13 PM, Patrick Hunt <ph...@apache.org> wrote:
>> Tim, one other thing you might want to be aware of:
>> http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_supervision
>>
>> Patrick
>>
>> On Thu, Oct 28, 2010 at 9:11 AM, Patrick Hunt <ph...@apache.org> wrote:
>>> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
>>> <ti...@gmail.com> wrote:
>>>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>>>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>>>> installing ZooKeeper to perform the HBase coordination and to manage a
>>>> few simple process locks for other tasks we run.
>>>>
>>>> Could someone please advise what kind of heap we should give to our
>>>> single ZK node and also (ahem) how does one actually set this? It's
>>>> not immediately obvious in the docs or config.
>>>
>>> The amount of heap necessary depends on the application(s) using ZK;
>>> how you configure the heap depends on the packaging you are using to
>>> start ZK.
>>>
>>> Are you using zkServer.sh from our distribution? If so, you probably
>>> want to set the JVMFLAGS env variable. We pass this through to the
>>> JVM; see -Xmx in the man page
>>> (http://www.manpagez.com/man/1/java/).
>>>
>>> Given this is HBase (which I'm reasonably familiar with), the default
>>> heap should be fine. However, you might want to check with the HBase
>>> team on that.
>>>
>>> I'd also encourage you to enter a JIRA on the (lack of) doc issue you
>>> highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER
>>>
>>> Regards,
>>>
>>> Patrick
>>>
>>
>


Re: Setting the heap size

Posted by Tim Robertson <ti...@gmail.com>.
Great - thanks Patrick!


On Thu, Oct 28, 2010 at 6:13 PM, Patrick Hunt <ph...@apache.org> wrote:
> Tim, one other thing you might want to be aware of:
> http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_supervision
>
> Patrick
>
> On Thu, Oct 28, 2010 at 9:11 AM, Patrick Hunt <ph...@apache.org> wrote:
>> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
>> <ti...@gmail.com> wrote:
>>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>>> installing ZooKeeper to perform the HBase coordination and to manage a
>>> few simple process locks for other tasks we run.
>>>
>>> Could someone please advise what kind of heap we should give to our
>>> single ZK node and also (ahem) how does one actually set this? It's
>>> not immediately obvious in the docs or config.
>>
>> The amount of heap necessary depends on the application(s) using ZK;
>> how you configure the heap depends on the packaging you are using to
>> start ZK.
>>
>> Are you using zkServer.sh from our distribution? If so, you probably
>> want to set the JVMFLAGS env variable. We pass this through to the
>> JVM; see -Xmx in the man page
>> (http://www.manpagez.com/man/1/java/).
>>
>> Given this is HBase (which I'm reasonably familiar with), the default
>> heap should be fine. However, you might want to check with the HBase
>> team on that.
>>
>> I'd also encourage you to enter a JIRA on the (lack of) doc issue you
>> highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER
>>
>> Regards,
>>
>> Patrick
>>
>

Re: Setting the heap size

Posted by Patrick Hunt <ph...@apache.org>.
Tim, one other thing you might want to be aware of:
http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_supervision

Patrick

On Thu, Oct 28, 2010 at 9:11 AM, Patrick Hunt <ph...@apache.org> wrote:
> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
> <ti...@gmail.com> wrote:
>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>> installing ZooKeeper to perform the HBase coordination and to manage a
>> few simple process locks for other tasks we run.
>>
>> Could someone please advise what kind of heap we should give to our
>> single ZK node and also (ahem) how does one actually set this? It's
>> not immediately obvious in the docs or config.
>
> The amount of heap necessary depends on the application(s) using ZK;
> how you configure the heap depends on the packaging you are using to
> start ZK.
>
> Are you using zkServer.sh from our distribution? If so, you probably
> want to set the JVMFLAGS env variable. We pass this through to the
> JVM; see -Xmx in the man page
> (http://www.manpagez.com/man/1/java/).
>
> Given this is HBase (which I'm reasonably familiar with), the default
> heap should be fine. However, you might want to check with the HBase
> team on that.
>
> I'd also encourage you to enter a JIRA on the (lack of) doc issue you
> highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER
>
> Regards,
>
> Patrick
>

Re: Setting the heap size

Posted by Patrick Hunt <ph...@apache.org>.
On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
<ti...@gmail.com> wrote:
> We are setting up a small 13-node Hadoop cluster running 1 HDFS
> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
> installing ZooKeeper to perform the HBase coordination and to manage a
> few simple process locks for other tasks we run.
>
> Could someone please advise what kind of heap we should give to our
> single ZK node and also (ahem) how does one actually set this? It's
> not immediately obvious in the docs or config.

The amount of heap necessary depends on the application(s) using ZK;
how you configure the heap depends on the packaging you are using to
start ZK.

Are you using zkServer.sh from our distribution? If so, you probably
want to set the JVMFLAGS env variable. We pass this through to the
JVM; see -Xmx in the man page
(http://www.manpagez.com/man/1/java/).
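
For example, a minimal sketch (the 1g value is illustrative, not a sizing recommendation):

    # zkServer.sh picks up JVMFLAGS from the environment and passes it
    # through to the JVM; -Xmx caps the maximum heap size
    export JVMFLAGS="-Xmx1g"
    bin/zkServer.sh start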

Given this is HBase (which I'm reasonably familiar with), the default
heap should be fine. However, you might want to check with the HBase
team on that.

I'd also encourage you to enter a JIRA on the (lack of) doc issue you
highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER

Regards,

Patrick