Posted to user@cassandra.apache.org by learner dba <ca...@yahoo.com.INVALID> on 2018/06/05 13:54:23 UTC

How to identify which table is causing the Maximum Memory usage limit

Hi,
We see this message often. The cluster has multiple keyspaces and column families; how do I know which CF is causing this? Or could it be something else? Do we need to worry about this message?

INFO  [CounterMutationStage-1] 2018-06-05 13:36:35,983 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
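Since NoSpamLogger rate-limits this message, a first step is to see how often it actually fires and whether it clusters around a particular workload window. A minimal sketch (assuming the log line layout quoted above) that tallies occurrences per hour:

```python
# Tally "Maximum memory usage reached" warnings per hour from a Cassandra
# system.log, to see whether the message correlates with a workload window.
# Assumes the INFO line layout quoted above (level, thread, date, time, ...).
from collections import Counter

def warnings_per_hour(log_lines):
    """Count 'Maximum memory usage reached' messages, keyed by (date, hour)."""
    hits = Counter()
    for line in log_lines:
        if "Maximum memory usage reached" not in line:
            continue
        parts = line.split()
        date, timestamp = parts[2], parts[3]   # e.g. 2018-06-05 13:36:35,983
        hits[(date, timestamp[:2] + ":00")] += 1
    return hits
```

Feed it the lines of system.log, e.g. `warnings_per_hour(open("/var/log/cassandra/system.log"))` (the default log path is an assumption; adjust for your install).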


Re: How to identify which table is causing the Maximum Memory usage limit

Posted by Nitan Kainth <ni...@gmail.com>.
Sorry, I didn't mean to hijack the thread. I have seen similar
issues and always ignored them because they weren't actually causing any
problems. But I am really curious how to track these down.

On Mon, Jun 11, 2018 at 9:45 AM, Nitan Kainth <ni...@gmail.com> wrote:

> thanks Martin.
>
> The 99th percentile partition size is similar across all tables; only the
> max is higher.
>
> The question remains: how do I identify which table is triggering this
> "Maximum memory usage reached (512.000MiB)" message?

Re: How to identify which table is causing the Maximum Memory usage limit

Posted by Nitan Kainth <ni...@gmail.com>.
thanks Martin.

The 99th percentile partition size is similar across all tables; only the
max is higher.

The question remains: how do I identify which table is triggering this
"Maximum memory usage reached (512.000MiB)" message?
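As far as I can tell, that log line comes from the node-wide buffer pool (sized by file_cache_size_in_mb), which is shared across all tables, so it never names a culprit. One indirect way to narrow it down is to rank tables by their largest compacted partition, which `nodetool tablestats` reports. A sketch that parses its plain-text output (field names taken from Cassandra 3.x; verify against your version):

```python
# Rank tables by "Compacted partition maximum bytes" from `nodetool tablestats`
# output, to narrow down which table holds the oversized partitions.
# Parsing assumes the plain-text tablestats layout of Cassandra 3.x.
import re

def rank_by_max_partition(tablestats_text):
    """Return (table, max_partition_bytes) pairs, largest first."""
    results = []
    table = None
    for line in tablestats_text.splitlines():
        line = line.strip()
        if line.startswith("Table:"):
            table = line.split(":", 1)[1].strip()
        elif line.startswith("Compacted partition maximum bytes:") and table:
            size = int(line.split(":", 1)[1].strip())
            results.append((table, size))
    return sorted(results, key=lambda t: t[1], reverse=True)
```

Feed it the command's output, e.g. `rank_by_max_partition(subprocess.check_output(["nodetool", "tablestats"], text=True))`, and look at the top few entries.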

On Mon, Jun 11, 2018 at 5:59 AM, Martin Mačura <m....@gmail.com> wrote:

> Hi,
> we've had this issue with large partitions (100 MB and more).  Use
> nodetool tablehistograms to find partition sizes for each table.
>
> If you have enough heap space to spare, try increasing this parameter:
> file_cache_size_in_mb: 512
>
> There's also the following parameter, but I did not test the impact yet:
> buffer_pool_use_heap_if_exhausted: true
>
>
> Regards,
>
> Martin

Re: How to identify which table is causing the Maximum Memory usage limit

Posted by Martin Mačura <m....@gmail.com>.
Hi,
we've had this issue with large partitions (100 MB and more). Use
nodetool tablehistograms to find the partition size distribution for each table.

If you have enough heap space to spare, try increasing this parameter:
file_cache_size_in_mb: 512

There's also the following parameter, but I did not test the impact yet:
buffer_pool_use_heap_if_exhausted: true
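For reference, both settings live in cassandra.yaml. A sketch of what the stanza might look like (option names match Cassandra 3.x; the values shown and their defaults are assumptions to verify for your version):

```yaml
# cassandra.yaml -- chunk-cache tuning (Cassandra 3.x option names)

# Off-heap limit for the chunk cache; the 512MiB in the log message is the
# cap this setting controls. Raising it trades RAM for fewer evictions.
file_cache_size_in_mb: 1024

# When the off-heap pool is exhausted, allocate buffers on-heap instead of
# failing the allocation; impact untested per the thread above.
buffer_pool_use_heap_if_exhausted: true
```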


Regards,

Martin


On Tue, Jun 5, 2018 at 3:54 PM, learner dba
<ca...@yahoo.com.invalid> wrote:
> Hi,
>
> We see this message often, cluster has multiple keyspaces and column
> families;
> How do I know which CF is causing this?
> Or it could be something else?
> Do we need to worry about this message?
>
> INFO  [CounterMutationStage-1] 2018-06-05 13:36:35,983 NoSpamLogger.java:91
> - Maximum memory usage reached (512.000MiB), cannot allocate chunk of
> 1.000MiB
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@cassandra.apache.org
For additional commands, e-mail: user-help@cassandra.apache.org