Posted to user@zookeeper.apache.org by Mike Schilli <m...@perlmeister.com> on 2012/02/09 02:23:12 UTC

ZooKeeper Memory Usage

We've got a ZooKeeper instance that's using about 5 GB of resident
memory. Every time we restart it, it starts at 200MB, and then grows
slowly until it is back at 5 GB.

The large footprint is related to how much data we've got in there.
What's interesting, though, is that the process size doesn't shrink if
we purge some of the data.

Now, this isn't a big problem; I'm just curious whether the process will fall
over at some point if it can't get more memory, or whether it'll just make do
by caching less data.

Also, if I remember correctly, there's a configuration variable to set
the maximum size; what happens if ZK reaches that?

-- -- Mike

Mike Schilli
m@perlmeister.com

Re: ZooKeeper Memory Usage

Posted by César Álvarez Núñez <ce...@gmail.com>.
Which ZK version are you using?
We had high memory consumption problems with v3.4.0 that were solved by
switching to v3.3.3 instead.
Testing with v3.4.2 and v3.3.4 is still pending.

/César.


On Thu, Feb 9, 2012 at 2:23 AM, Mike Schilli <m...@perlmeister.com> wrote:

> We've got a ZooKeeper instance that's using about 5 GB of resident
> memory. Every time we restart it, it starts at 200MB, and then grows
> slowly until it is back at 5 GB.
>
> The large footprint is related to how much data we've got in there.
> What's interesting, though, is that the process size doesn't shrink if
> we purge some of the data.
>
> Now, this isn't a big problem, I'm just curious if the process will fall
> over at some point if it can't get more memory or if it'll just make due
> by caching less data.
>
> Also, if I remember correctly, there's a configuration variable to set
> the maximum size, what happens if ZK reaches that?
>
> -- -- Mike
>
> Mike Schilli
> m@perlmeister.com
>

Re: ZooKeeper Memory Usage

Posted by César Álvarez Núñez <ce...@gmail.com>.
The problem is that the stress test is a full system stress test, so it is not
possible (due to time constraints) to isolate the ZK part.
Fortunately, we plan to run the same system stress test with 3.4.2 next week,
so I'll keep you informed.
/César.

On Thu, Feb 9, 2012 at 3:59 PM, Camille Fournier <ca...@apache.org> wrote:

> That is interesting. Can you send us the stress test so we can investigate?
> Also, could you possibly run it on the new RC to see if it's still a
> problem?
>
> Thanks,
> C
>
> 2012/2/9 César Álvarez Núñez <ce...@gmail.com>
>
> > In my case, our stress test show up a linear increase of "tenured memory"
> > from 0 to > 3GiB with ZK 3.4.0 whereas the same stress-test with 3.3.3
> > keeps "tenured memory" stable and < 10MiB.
> >
> > The stress test performs many zNodes creation and delete but the overall
> zk
> > usage at any moment in time was relative small.
> >
> > BR,
> > /César.
> >
> > On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org>
> > wrote:
> >
> > > This is really a question about how the jvm grows its heaps and resizes
> > > them. If the jvm cannot allocate enough memory for the process because
> > you
> > > didn't set the max memory high enough, it will fall over. Zookeeper
> keeps
> > > its entire state in memory for performance reasons, if it were to swap
> > that
> > > would be quite bad for performance.
> > >
> > > C
> > > On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
> > >
> > > > We've got a ZooKeeper instance that's using about 5 GB of resident
> > > > memory. Every time we restart it, it starts at 200MB, and then grows
> > > > slowly until it is back at 5 GB.
> > > >
> > > > The large footprint is related to how much data we've got in there.
> > > > What's interesting, though, is that the process size doesn't shrink
> if
> > > > we purge some of the data.
> > > >
> > > > Now, this isn't a big problem, I'm just curious if the process will
> > fall
> > > > over at some point if it can't get more memory or if it'll just make
> > due
> > > > by caching less data.
> > > >
> > > > Also, if I remember correctly, there's a configuration variable to
> set
> > > > the maximum size, what happens if ZK reaches that?
> > > >
> > > > -- -- Mike
> > > >
> > > > Mike Schilli
> > > > m@perlmeister.com
> > > >
> > >
> >
>

Re: ZooKeeper Memory Usage

Posted by Camille Fournier <ca...@apache.org>.
That is interesting. Can you send us the stress test so we can investigate?
Also, could you possibly run it on the new RC to see if it's still a
problem?

Thanks,
C

2012/2/9 César Álvarez Núñez <ce...@gmail.com>

> In my case, our stress test show up a linear increase of "tenured memory"
> from 0 to > 3GiB with ZK 3.4.0 whereas the same stress-test with 3.3.3
> keeps "tenured memory" stable and < 10MiB.
>
> The stress test performs many zNodes creation and delete but the overall zk
> usage at any moment in time was relative small.
>
> BR,
> /César.
>
> On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org>
> wrote:
>
> > This is really a question about how the jvm grows its heaps and resizes
> > them. If the jvm cannot allocate enough memory for the process because
> you
> > didn't set the max memory high enough, it will fall over. Zookeeper keeps
> > its entire state in memory for performance reasons, if it were to swap
> that
> > would be quite bad for performance.
> >
> > C
> > On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
> >
> > > We've got a ZooKeeper instance that's using about 5 GB of resident
> > > memory. Every time we restart it, it starts at 200MB, and then grows
> > > slowly until it is back at 5 GB.
> > >
> > > The large footprint is related to how much data we've got in there.
> > > What's interesting, though, is that the process size doesn't shrink if
> > > we purge some of the data.
> > >
> > > Now, this isn't a big problem, I'm just curious if the process will
> fall
> > > over at some point if it can't get more memory or if it'll just make
> due
> > > by caching less data.
> > >
> > > Also, if I remember correctly, there's a configuration variable to set
> > > the maximum size, what happens if ZK reaches that?
> > >
> > > -- -- Mike
> > >
> > > Mike Schilli
> > > m@perlmeister.com
> > >
> >
>

Re: ZooKeeper Memory Usage

Posted by Neha Narkhede <ne...@gmail.com>.
Was there ever a JIRA created for this issue?

Thanks,
Neha

On Fri, Feb 10, 2012 at 9:31 AM, Mahadev Konar <ma...@hortonworks.com> wrote:
> Great. You should have the gc logs then. Mind creating a jira and
> uploading to it?
>
> mahadev
>
> 2012/2/10 César Álvarez Núñez <ce...@gmail.com>:
>> This is the java.env file content.
>>
>> now=`date +%d%m%Y_%H%M%S`
>>
>> gcLogFile="/srv/zk/GC/`hostname`-${now}.log"
>> gcOpts="${gcOpts} -Xloggc:$gcLogFile"
>> gcOpts="${gcOpts} -XX:+PrintGC"
>> gcOpts="${gcOpts} -XX:+PrintGCTimeStamps"
>> gcOpts="${gcOpts} -XX:+PrintGCDetails"
>> gcOpts="${gcOpts} -XX:+PrintTenuringDistribution"
>> gcOpts="${gcOpts} -XX:+PrintHeapAtGC"
>>
>> gcOpts="${gcOpts} -XX:+AggressiveHeap"
>> #https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting
>> #gcOpts="${gcOpts} -XX:+UseConcMarkSweepGC"
>> #gcOpts="${gcOpts} -XX:ParallelGCThreads=8"
>> #gcOpts="-Xms128M -Xmx1G"
>>
>> jvmOpts="${jvmOpts} -d64"
>> jvmOpts="${jvmOpts} -server"
>> jvmOpts="${jvmOpts} -XX:+UseCompressedOops"
>>
>> JVMFLAGS="${gcOpts} ${jvmOpts}"
>>
>> /César.
>>
>> On Thu, Feb 9, 2012 at 7:17 PM, Mahadev Konar <ma...@hortonworks.com>wrote:
>>
>>> This is interesting and important.
>>>
>>> Cesar, what jvm options are you running with? Can you the options in:
>>>
>>> https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting
>>>
>>> Atleast get the GC logs that we can look at?
>>>
>>> This will be very interesting.
>>>
>>> mahadev
>>>
>>>
>>> 2012/2/9 César Álvarez Núñez <ce...@gmail.com>:
>>> > In my case, our stress test show up a linear increase of "tenured memory"
>>> > from 0 to > 3GiB with ZK 3.4.0 whereas the same stress-test with 3.3.3
>>> > keeps "tenured memory" stable and < 10MiB.
>>> >
>>> > The stress test performs many zNodes creation and delete but the overall
>>> zk
>>> > usage at any moment in time was relative small.
>>> >
>>> > BR,
>>> > /César.
>>> >
>>> > On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org>
>>> wrote:
>>> >
>>> >> This is really a question about how the jvm grows its heaps and resizes
>>> >> them. If the jvm cannot allocate enough memory for the process because
>>> you
>>> >> didn't set the max memory high enough, it will fall over. Zookeeper
>>> keeps
>>> >> its entire state in memory for performance reasons, if it were to swap
>>> that
>>> >> would be quite bad for performance.
>>> >>
>>> >> C
>>> >> On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
>>> >>
>>> >> > We've got a ZooKeeper instance that's using about 5 GB of resident
>>> >> > memory. Every time we restart it, it starts at 200MB, and then grows
>>> >> > slowly until it is back at 5 GB.
>>> >> >
>>> >> > The large footprint is related to how much data we've got in there.
>>> >> > What's interesting, though, is that the process size doesn't shrink if
>>> >> > we purge some of the data.
>>> >> >
>>> >> > Now, this isn't a big problem, I'm just curious if the process will
>>> fall
>>> >> > over at some point if it can't get more memory or if it'll just make
>>> due
>>> >> > by caching less data.
>>> >> >
>>> >> > Also, if I remember correctly, there's a configuration variable to set
>>> >> > the maximum size, what happens if ZK reaches that?
>>> >> >
>>> >> > -- -- Mike
>>> >> >
>>> >> > Mike Schilli
>>> >> > m@perlmeister.com
>>> >> >
>>> >>
>>>
>>>
>>>
>>> --
>>> Mahadev Konar
>>> Hortonworks Inc.
>>> http://hortonworks.com/
>>>
>
>
>
> --
> Mahadev Konar
> Hortonworks Inc.
> http://hortonworks.com/

Re: ZooKeeper Memory Usage

Posted by Mahadev Konar <ma...@hortonworks.com>.
Great. You should have the GC logs then. Mind creating a JIRA and
uploading them to it?

mahadev

2012/2/10 César Álvarez Núñez <ce...@gmail.com>:
> This is the java.env file content.
>
> now=`date +%d%m%Y_%H%M%S`
>
> gcLogFile="/srv/zk/GC/`hostname`-${now}.log"
> gcOpts="${gcOpts} -Xloggc:$gcLogFile"
> gcOpts="${gcOpts} -XX:+PrintGC"
> gcOpts="${gcOpts} -XX:+PrintGCTimeStamps"
> gcOpts="${gcOpts} -XX:+PrintGCDetails"
> gcOpts="${gcOpts} -XX:+PrintTenuringDistribution"
> gcOpts="${gcOpts} -XX:+PrintHeapAtGC"
>
> gcOpts="${gcOpts} -XX:+AggressiveHeap"
> #https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting
> #gcOpts="${gcOpts} -XX:+UseConcMarkSweepGC"
> #gcOpts="${gcOpts} -XX:ParallelGCThreads=8"
> #gcOpts="-Xms128M -Xmx1G"
>
> jvmOpts="${jvmOpts} -d64"
> jvmOpts="${jvmOpts} -server"
> jvmOpts="${jvmOpts} -XX:+UseCompressedOops"
>
> JVMFLAGS="${gcOpts} ${jvmOpts}"
>
> /César.
>
> On Thu, Feb 9, 2012 at 7:17 PM, Mahadev Konar <ma...@hortonworks.com>wrote:
>
>> This is interesting and important.
>>
>> Cesar, what jvm options are you running with? Can you the options in:
>>
>> https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting
>>
>> Atleast get the GC logs that we can look at?
>>
>> This will be very interesting.
>>
>> mahadev
>>
>>
>> 2012/2/9 César Álvarez Núñez <ce...@gmail.com>:
>> > In my case, our stress test show up a linear increase of "tenured memory"
>> > from 0 to > 3GiB with ZK 3.4.0 whereas the same stress-test with 3.3.3
>> > keeps "tenured memory" stable and < 10MiB.
>> >
>> > The stress test performs many zNodes creation and delete but the overall
>> zk
>> > usage at any moment in time was relative small.
>> >
>> > BR,
>> > /César.
>> >
>> > On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org>
>> wrote:
>> >
>> >> This is really a question about how the jvm grows its heaps and resizes
>> >> them. If the jvm cannot allocate enough memory for the process because
>> you
>> >> didn't set the max memory high enough, it will fall over. Zookeeper
>> keeps
>> >> its entire state in memory for performance reasons, if it were to swap
>> that
>> >> would be quite bad for performance.
>> >>
>> >> C
>> >> On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
>> >>
>> >> > We've got a ZooKeeper instance that's using about 5 GB of resident
>> >> > memory. Every time we restart it, it starts at 200MB, and then grows
>> >> > slowly until it is back at 5 GB.
>> >> >
>> >> > The large footprint is related to how much data we've got in there.
>> >> > What's interesting, though, is that the process size doesn't shrink if
>> >> > we purge some of the data.
>> >> >
>> >> > Now, this isn't a big problem, I'm just curious if the process will
>> fall
>> >> > over at some point if it can't get more memory or if it'll just make
>> due
>> >> > by caching less data.
>> >> >
>> >> > Also, if I remember correctly, there's a configuration variable to set
>> >> > the maximum size, what happens if ZK reaches that?
>> >> >
>> >> > -- -- Mike
>> >> >
>> >> > Mike Schilli
>> >> > m@perlmeister.com
>> >> >
>> >>
>>
>>
>>
>> --
>> Mahadev Konar
>> Hortonworks Inc.
>> http://hortonworks.com/
>>



-- 
Mahadev Konar
Hortonworks Inc.
http://hortonworks.com/

Re: ZooKeeper Memory Usage

Posted by César Álvarez Núñez <ce...@gmail.com>.
This is the java.env file content.

now=`date +%d%m%Y_%H%M%S`

gcLogFile="/srv/zk/GC/`hostname`-${now}.log"
gcOpts="${gcOpts} -Xloggc:$gcLogFile"
gcOpts="${gcOpts} -XX:+PrintGC"
gcOpts="${gcOpts} -XX:+PrintGCTimeStamps"
gcOpts="${gcOpts} -XX:+PrintGCDetails"
gcOpts="${gcOpts} -XX:+PrintTenuringDistribution"
gcOpts="${gcOpts} -XX:+PrintHeapAtGC"

gcOpts="${gcOpts} -XX:+AggressiveHeap"
#https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting
#gcOpts="${gcOpts} -XX:+UseConcMarkSweepGC"
#gcOpts="${gcOpts} -XX:ParallelGCThreads=8"
#gcOpts="-Xms128M -Xmx1G"

jvmOpts="${jvmOpts} -d64"
jvmOpts="${jvmOpts} -server"
jvmOpts="${jvmOpts} -XX:+UseCompressedOops"

JVMFLAGS="${gcOpts} ${jvmOpts}"

/César.

On Thu, Feb 9, 2012 at 7:17 PM, Mahadev Konar <ma...@hortonworks.com>wrote:

> This is interesting and important.
>
> Cesar, what jvm options are you running with? Can you the options in:
>
> https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting
>
> Atleast get the GC logs that we can look at?
>
> This will be very interesting.
>
> mahadev
>
>
> 2012/2/9 César Álvarez Núñez <ce...@gmail.com>:
> > In my case, our stress test show up a linear increase of "tenured memory"
> > from 0 to > 3GiB with ZK 3.4.0 whereas the same stress-test with 3.3.3
> > keeps "tenured memory" stable and < 10MiB.
> >
> > The stress test performs many zNodes creation and delete but the overall
> zk
> > usage at any moment in time was relative small.
> >
> > BR,
> > /César.
> >
> > On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org>
> wrote:
> >
> >> This is really a question about how the jvm grows its heaps and resizes
> >> them. If the jvm cannot allocate enough memory for the process because
> you
> >> didn't set the max memory high enough, it will fall over. Zookeeper
> keeps
> >> its entire state in memory for performance reasons, if it were to swap
> that
> >> would be quite bad for performance.
> >>
> >> C
> >> On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
> >>
> >> > We've got a ZooKeeper instance that's using about 5 GB of resident
> >> > memory. Every time we restart it, it starts at 200MB, and then grows
> >> > slowly until it is back at 5 GB.
> >> >
> >> > The large footprint is related to how much data we've got in there.
> >> > What's interesting, though, is that the process size doesn't shrink if
> >> > we purge some of the data.
> >> >
> >> > Now, this isn't a big problem, I'm just curious if the process will
> fall
> >> > over at some point if it can't get more memory or if it'll just make
> due
> >> > by caching less data.
> >> >
> >> > Also, if I remember correctly, there's a configuration variable to set
> >> > the maximum size, what happens if ZK reaches that?
> >> >
> >> > -- -- Mike
> >> >
> >> > Mike Schilli
> >> > m@perlmeister.com
> >> >
> >>
>
>
>
> --
> Mahadev Konar
> Hortonworks Inc.
> http://hortonworks.com/
>

Re: ZooKeeper Memory Usage

Posted by Ariel Weisberg <aw...@voltdb.com>.
Hi,

If you set -Xmx2g or so and also run with -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp, you will get a heap dump when the heap fills up. If you
run the Eclipse Memory Analyzer (http://www.eclipse.org/mat/) on it, then ye
olde pack rat will stick out like a sore thumb.
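
In java.env terms that would look something like this (just a sketch; adjust
the dump path and the -Xmx value to whatever fits your box):

# sketch: cap the heap and dump it on OOM so MAT has something to chew on
jvmOpts="${jvmOpts} -Xmx2g"
jvmOpts="${jvmOpts} -XX:+HeapDumpOnOutOfMemoryError"
jvmOpts="${jvmOpts} -XX:HeapDumpPath=/tmp"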

Ariel

On Thu, Feb 9, 2012 at 1:17 PM, Mahadev Konar <ma...@hortonworks.com>wrote:

> This is interesting and important.
>
> Cesar, what jvm options are you running with? Can you the options in:
>
> https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting
>
> Atleast get the GC logs that we can look at?
>
> This will be very interesting.
>
> mahadev
>
>
> 2012/2/9 César Álvarez Núñez <ce...@gmail.com>:
> > In my case, our stress test show up a linear increase of "tenured memory"
> > from 0 to > 3GiB with ZK 3.4.0 whereas the same stress-test with 3.3.3
> > keeps "tenured memory" stable and < 10MiB.
> >
> > The stress test performs many zNodes creation and delete but the overall
> zk
> > usage at any moment in time was relative small.
> >
> > BR,
> > /César.
> >
> > On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org>
> wrote:
> >
> >> This is really a question about how the jvm grows its heaps and resizes
> >> them. If the jvm cannot allocate enough memory for the process because
> you
> >> didn't set the max memory high enough, it will fall over. Zookeeper
> keeps
> >> its entire state in memory for performance reasons, if it were to swap
> that
> >> would be quite bad for performance.
> >>
> >> C
> >> On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
> >>
> >> > We've got a ZooKeeper instance that's using about 5 GB of resident
> >> > memory. Every time we restart it, it starts at 200MB, and then grows
> >> > slowly until it is back at 5 GB.
> >> >
> >> > The large footprint is related to how much data we've got in there.
> >> > What's interesting, though, is that the process size doesn't shrink if
> >> > we purge some of the data.
> >> >
> >> > Now, this isn't a big problem, I'm just curious if the process will
> fall
> >> > over at some point if it can't get more memory or if it'll just make
> due
> >> > by caching less data.
> >> >
> >> > Also, if I remember correctly, there's a configuration variable to set
> >> > the maximum size, what happens if ZK reaches that?
> >> >
> >> > -- -- Mike
> >> >
> >> > Mike Schilli
> >> > m@perlmeister.com
> >> >
> >>
>
>
>
> --
> Mahadev Konar
> Hortonworks Inc.
> http://hortonworks.com/
>

Re: ZooKeeper Memory Usage

Posted by Mahadev Konar <ma...@hortonworks.com>.
This is interesting and important.

Cesar, what JVM options are you running with? Can you try the options in:

https://cwiki.apache.org/confluence/display/ZOOKEEPER/Troubleshooting

At least get the GC logs so that we can take a look?

This will be very interesting.

mahadev


2012/2/9 César Álvarez Núñez <ce...@gmail.com>:
> In my case, our stress test show up a linear increase of "tenured memory"
> from 0 to > 3GiB with ZK 3.4.0 whereas the same stress-test with 3.3.3
> keeps "tenured memory" stable and < 10MiB.
>
> The stress test performs many zNodes creation and delete but the overall zk
> usage at any moment in time was relative small.
>
> BR,
> /César.
>
> On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org> wrote:
>
>> This is really a question about how the jvm grows its heaps and resizes
>> them. If the jvm cannot allocate enough memory for the process because you
>> didn't set the max memory high enough, it will fall over. Zookeeper keeps
>> its entire state in memory for performance reasons, if it were to swap that
>> would be quite bad for performance.
>>
>> C
>> On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
>>
>> > We've got a ZooKeeper instance that's using about 5 GB of resident
>> > memory. Every time we restart it, it starts at 200MB, and then grows
>> > slowly until it is back at 5 GB.
>> >
>> > The large footprint is related to how much data we've got in there.
>> > What's interesting, though, is that the process size doesn't shrink if
>> > we purge some of the data.
>> >
>> > Now, this isn't a big problem, I'm just curious if the process will fall
>> > over at some point if it can't get more memory or if it'll just make due
>> > by caching less data.
>> >
>> > Also, if I remember correctly, there's a configuration variable to set
>> > the maximum size, what happens if ZK reaches that?
>> >
>> > -- -- Mike
>> >
>> > Mike Schilli
>> > m@perlmeister.com
>> >
>>



-- 
Mahadev Konar
Hortonworks Inc.
http://hortonworks.com/

Re: ZooKeeper Memory Usage

Posted by César Álvarez Núñez <ce...@gmail.com>.
In my case, our stress test showed a linear increase of "tenured memory"
from 0 to > 3 GiB with ZK 3.4.0, whereas the same stress test with 3.3.3
keeps "tenured memory" stable and < 10 MiB.

The stress test performs many znode creations and deletions, but the overall
ZK usage at any moment in time was relatively small.
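
Roughly the kind of churn involved, as a hypothetical sketch against the
stock CLI (the real test is part of a larger system suite and drives the
ensemble through the client API; the node names and count here are made up):

# hypothetical sketch of the create/delete churn the test generates
for i in $(seq 1 100000); do
  bin/zkCli.sh -server localhost:2181 create /stress-node-$i "x" > /dev/null
  bin/zkCli.sh -server localhost:2181 delete /stress-node-$i > /dev/null
done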

BR,
/César.

On Thu, Feb 9, 2012 at 3:14 PM, Camille Fournier <ca...@apache.org> wrote:

> This is really a question about how the jvm grows its heaps and resizes
> them. If the jvm cannot allocate enough memory for the process because you
> didn't set the max memory high enough, it will fall over. Zookeeper keeps
> its entire state in memory for performance reasons, if it were to swap that
> would be quite bad for performance.
>
> C
> On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:
>
> > We've got a ZooKeeper instance that's using about 5 GB of resident
> > memory. Every time we restart it, it starts at 200MB, and then grows
> > slowly until it is back at 5 GB.
> >
> > The large footprint is related to how much data we've got in there.
> > What's interesting, though, is that the process size doesn't shrink if
> > we purge some of the data.
> >
> > Now, this isn't a big problem, I'm just curious if the process will fall
> > over at some point if it can't get more memory or if it'll just make due
> > by caching less data.
> >
> > Also, if I remember correctly, there's a configuration variable to set
> > the maximum size, what happens if ZK reaches that?
> >
> > -- -- Mike
> >
> > Mike Schilli
> > m@perlmeister.com
> >
>

Re: ZooKeeper Memory Usage

Posted by Camille Fournier <ca...@apache.org>.
This is really a question about how the JVM grows and resizes its heap. If
the JVM cannot allocate enough memory for the process because you didn't set
the max memory high enough, it will fall over. ZooKeeper keeps its entire
state in memory for performance reasons; if it were to swap, that would be
quite bad for performance.
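
For example (a sketch only, assuming the usual setup where conf/java.env is
sourced by the start scripts; the 4g figure is illustrative, not a
recommendation), you would pin the heap with something like:

# conf/java.env -- size the heap to fit your data tree, with enough
# headroom that the machine never swaps
JVMFLAGS="-Xms4g -Xmx4g"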

C
On Feb 8, 2012 8:23 PM, "Mike Schilli" <m...@perlmeister.com> wrote:

> We've got a ZooKeeper instance that's using about 5 GB of resident
> memory. Every time we restart it, it starts at 200MB, and then grows
> slowly until it is back at 5 GB.
>
> The large footprint is related to how much data we've got in there.
> What's interesting, though, is that the process size doesn't shrink if
> we purge some of the data.
>
> Now, this isn't a big problem, I'm just curious if the process will fall
> over at some point if it can't get more memory or if it'll just make due
> by caching less data.
>
> Also, if I remember correctly, there's a configuration variable to set
> the maximum size, what happens if ZK reaches that?
>
> -- -- Mike
>
> Mike Schilli
> m@perlmeister.com
>