Posted to user@cassandra.apache.org by Jason Harvey <al...@gmail.com> on 2011/04/02 05:21:05 UTC

Bizarre side-effect of increasing read concurrency

After increasing read concurrency from 8 to 64, GC mark-and-sweep was
suddenly able to reclaim much more memory than it previously did.

Previously, mark-and-sweep would run around 5.5GB, and would cut heap
usage to 4GB. Now, it still runs at 5.5GB, but it shrinks all the way
down to 2GB used. This behavior was consistent on every machine where I
increased read concurrency.

Any thoughts on why this behavior changed? No other diagnostics
appeared to correlate to the concurrency change, besides thread count.

Re: Bizarre side-effect of increasing read concurrency

Posted by Jason Harvey <al...@gmail.com>.
Ah, that would probably explain it. Thanks!

On Apr 1, 8:49 pm, Edward Capriolo <ed...@gmail.com> wrote:
> On Fri, Apr 1, 2011 at 11:27 PM, Jason Harvey <al...@gmail.com> wrote:
> > On further analysis, it looks like this behavior occurs when a node is
> > simply restarted. Is that normal behavior? If mark-and-sweep becomes
> > less and less effective over time, does that suggest an issue with GC,
> > or an issue with memory use?
>
> > On Apr 1, 8:21 pm, Jason Harvey <al...@gmail.com> wrote:
> >> After increasing read concurrency from 8 to 64, GC mark-and-sweep was
> >> suddenly able to reclaim much more memory than it previously did.
>
> >> Previously, mark-and-sweep would run around 5.5GB, and would cut heap
> >> usage to 4GB. Now, it still runs at 5.5GB, but it shrinks all the way
> >> down to 2GB used. This behavior was consistent on every machine where I
> >> increased read concurrency.
>
> >> Any thoughts on why this behavior changed? No other diagnostics
> >> appeared to correlate to the concurrency change, besides thread count.
>
> Jason,
>
> First, you do not need to restart to adjust concurrent readers; it
> can be done through JMX at runtime.
>
> As for the memory, after you restart you may have drained your caches
> and memtables, which explains why less memory is used.
>
> Java also enjoys using all the memory you allocate, and the garbage
> collector does not give it back unless it needs to.
>
> Edward

Re: Bizarre side-effect of increasing read concurrency

Posted by Peter Schuller <pe...@infidyne.com>.
> Java also enjoys using all the memory you allocate, and the garbage
> collector does not give it back unless it needs to.

This only explains why it never shrinks in top, not the increased heap
usage (which is presumably due to the memtables/key/row caches already
mentioned).

-- 
/ Peter Schuller

Re: Bizarre side-effect of increasing read concurrency

Posted by Edward Capriolo <ed...@gmail.com>.
On Fri, Apr 1, 2011 at 11:27 PM, Jason Harvey <al...@gmail.com> wrote:
> On further analysis, it looks like this behavior occurs when a node is
> simply restarted. Is that normal behavior? If mark-and-sweep becomes
> less and less effective over time, does that suggest an issue with GC,
> or an issue with memory use?
>
> On Apr 1, 8:21 pm, Jason Harvey <al...@gmail.com> wrote:
>> After increasing read concurrency from 8 to 64, GC mark-and-sweep was
>> suddenly able to reclaim much more memory than it previously did.
>>
>> Previously, mark-and-sweep would run around 5.5GB, and would cut heap
>> usage to 4GB. Now, it still runs at 5.5GB, but it shrinks all the way
>> down to 2GB used. This behavior was consistent on every machine where I
>> increased read concurrency.
>>
>> Any thoughts on why this behavior changed? No other diagnostics
>> appeared to correlate to the concurrency change, besides thread count.
>

Jason,

First, you do not need to restart to adjust concurrent readers; it
can be done through JMX at runtime.
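
The mechanics look roughly like the sketch below. To keep it self-contained it registers a stand-in MBean with a writable CorePoolSize attribute and resizes it in-process; the ObjectName and attribute name here are illustrative assumptions, and Cassandra's actual read-stage MBean name varies by version, so verify it in jconsole before trying this against a live node.

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxResizeDemo {
    // Stand-in for a stage thread pool exposing a writable pool-size attribute.
    public interface PoolMBean {
        int getCorePoolSize();
        void setCorePoolSize(int n);
    }

    public static class Pool implements PoolMBean {
        private volatile int core = 8;
        public int getCorePoolSize() { return core; }
        public void setCorePoolSize(int n) { core = n; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical name for illustration; Cassandra's real read stage
        // lives under an org.apache.cassandra.* domain.
        ObjectName name = new ObjectName("demo:type=ReadStage");
        server.registerMBean(new Pool(), name);

        // The same setAttribute call works against a remote node when the
        // connection is obtained via JMXConnectorFactory.connect(...) on
        // the node's JMX port, instead of the in-process platform server.
        server.setAttribute(name, new Attribute("CorePoolSize", 64));
        System.out.println(server.getAttribute(name, "CorePoolSize")); // prints 64
    }
}
```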

As for the memory, after you restart you may have drained your caches
and memtables, which explains why less memory is used.

Java also enjoys using all the memory you allocate, and the garbage
collector does not give it back unless it needs to.
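
That behavior can be seen on a plain JVM, independent of Cassandra. This sketch allocates and then drops about 64 MB and prints used versus committed heap; the committed size (what top reports) typically stays put even after used memory falls. System.gc() is only a hint, so the exact numbers vary by JVM and settings.

```java
public class HeapDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        byte[][] blocks = new byte[64][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[1 << 20]; // claim ~64 MB on the heap
        }
        long usedBefore = rt.totalMemory() - rt.freeMemory();

        blocks = null;   // drop every reference
        System.gc();     // request (not force) a collection
        long usedAfter = rt.totalMemory() - rt.freeMemory();

        System.out.println("used before GC: " + usedBefore / (1 << 20) + " MB");
        System.out.println("used after GC:  " + usedAfter / (1 << 20) + " MB");
        // totalMemory() is the committed heap, the figure top reports,
        // and the JVM rarely hands it back to the OS.
        System.out.println("committed heap: " + rt.totalMemory() / (1 << 20) + " MB");
    }
}
```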

Edward

Re: Bizarre side-effect of increasing read concurrency

Posted by Jason Harvey <al...@gmail.com>.
On further analysis, it looks like this behavior occurs when a node is
simply restarted. Is that normal behavior? If mark-and-sweep becomes
less and less effective over time, does that suggest an issue with GC,
or an issue with memory use?

On Apr 1, 8:21 pm, Jason Harvey <al...@gmail.com> wrote:
> After increasing read concurrency from 8 to 64, GC mark-and-sweep was
> suddenly able to reclaim much more memory than it previously did.
>
> Previously, mark-and-sweep would run around 5.5GB, and would cut heap
> usage to 4GB. Now, it still runs at 5.5GB, but it shrinks all the way
> down to 2GB used. This behavior was consistent on every machine where I
> increased read concurrency.
>
> Any thoughts on why this behavior changed? No other diagnostics
> appeared to correlate to the concurrency change, besides thread count.

Re: Bizarre side-effect of increasing read concurrency

Posted by Peter Schuller <pe...@infidyne.com>.
> My Xmx and Xms are both 7.5GB. However, I never see heap usage go
> past 5.5GB. Think it is still a good idea to increase the heap?

Not necessarily. I thought you had a max heap of 5.5GB, in which case a
live set of 4GB after a completed CMS pass seemed pretty high. It seems
more reasonable if the max heap is 7.5GB.

-- 
/ Peter Schuller

Re: Bizarre side-effect of increasing read concurrency

Posted by Jason Harvey <al...@gmail.com>.
My Xmx and Xms are both 7.5GB. However, I never see heap usage go
past 5.5GB. Think it is still a good idea to increase the heap?

Thanks,
Jason

On Apr 2, 2:45 am, Peter Schuller <pe...@infidyne.com> wrote:
> > Previously, mark-and-sweep would run around 5.5GB, and would cut heap
> > usage to 4GB. Now, it still runs at 5.5GB, but it shrinks all the way
> > down to 2GB used. This behavior was consistent on every machine where I
> > increased read concurrency.
>
> So each full CMS cycle brings it down to 4GB on a maximum heap size of
> 5.5GB? Or are you using VM options such that Xms != Xmx?
>
> If 5.5GB is the maximum heap size and you're at 4GB after the
> completion of a mark-and-sweep, I'd say it would be a good idea to
> pre-emptively up the heap size a bit (or decrease the memtable/key/row
> cache sizes).
>
> --
> / Peter Schuller

Re: Bizarre side-effect of increasing read concurrency

Posted by Peter Schuller <pe...@infidyne.com>.
> Previously, mark-and-sweep would run around 5.5GB, and would cut heap
> usage to 4GB. Now, it still runs at 5.5GB, but it shrinks all the way
> down to 2GB used. This behavior was consistent on every machine where I
> increased read concurrency.

So each full CMS cycle brings it down to 4GB on a maximum heap size of
5.5GB? Or are you using VM options such that Xms != Xmx?

If 5.5GB is the maximum heap size and you're at 4GB after the
completion of a mark-and-sweep, I'd say it would be a good idea to
pre-emptively up the heap size a bit (or decrease the memtable/key/row
cache sizes).
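
For reference, the heap is usually raised through conf/cassandra-env.sh rather than by editing JVM flags directly. A minimal sketch with illustrative sizes, not recommendations; the variable names below are the ones the stock cassandra-env.sh uses, so check your copy of the script before relying on them:

```shell
# conf/cassandra-env.sh -- illustrative sizes, not a recommendation.
# Setting MAX_HEAP_SIZE pins both -Xms and -Xmx to the same value,
# so the committed heap neither grows nor shrinks at runtime.
MAX_HEAP_SIZE="8G"
# Young-generation size; the stock script expects this to be set
# whenever MAX_HEAP_SIZE is overridden.
HEAP_NEWSIZE="800M"
```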


-- 
/ Peter Schuller