Posted to user@cassandra.apache.org by Mark Rose <ma...@markrose.ca> on 2017/04/03 21:08:03 UTC

Re: Maximum memory usage reached in cassandra!

You may have better luck switching to G1GC and using a much larger
heap (16 to 30GB). 4GB is likely too small for your amount of data,
especially if you have a lot of sstables. Then try increasing
file_cache_size_in_mb further.
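
For reference, a minimal sketch of where those settings live (assuming a
3.x-style layout with conf/jvm.options; on 2.x the same JVM flags go into
cassandra-env.sh, and the values below are illustrative, not tuned
recommendations):

  # conf/jvm.options -- disable the default CMS flags and enable G1
  #-XX:+UseParNewGC
  #-XX:+UseConcMarkSweepGC
  -XX:+UseG1GC
  -XX:MaxGCPauseMillis=500

  # conf/jvm.options (or MAX_HEAP_SIZE in cassandra-env.sh) -- fixed 16GB heap;
  # leave the new-gen size (-Xmn / HEAP_NEWSIZE) unset so G1 can size it itself
  -Xms16G
  -Xmx16G

  # conf/cassandra.yaml -- raise the off-heap buffer pool cap further, e.g. 2GB
  file_cache_size_in_mb: 2048

The "Maximum memory usage reached" message comes from the off-heap buffer
pool that file_cache_size_in_mb caps, so the yaml change only raises that
ceiling; the heap/GC change is what addresses a 4GB heap being undersized
for this much data.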

Cheers,
Mark

On Tue, Mar 28, 2017 at 3:01 AM, Mokkapati, Bhargav (Nokia -
IN/Chennai) <bh...@nokia.com> wrote:
> Hi Cassandra users,
>
>
>
> I am getting “Maximum memory usage reached (536870912 bytes), cannot
> allocate chunk of 1048576 bytes”. As a remedy I have raised the off-heap
> memory usage cap, i.e. the file_cache_size_in_mb parameter in cassandra.yaml,
> from 512 to 1024.
>
>
>
> But the increased limit has now filled up again, and Cassandra is logging
> “Maximum memory usage reached (1073741824 bytes), cannot allocate chunk of
> 1048576 bytes”.
>
>
>
> This issue occurs while index redistribution is happening. The Cassandra
> nodes are still UP, but read requests from the application side are failing.
>
>
>
> My configuration details are as below:
>
>
>
> 5-node cluster, each node with 68 disks, each disk 3.7 TB
>
>
>
> Total CPU cores - 8
>
>
>
> Memory (free -h):
>
>               total        used        free      shared  buff/cache   available
> Mem:           377G        265G         58G        378M         53G        104G
>
>
>
> MAX_HEAP_SIZE is 4GB
>
> file_cache_size_in_mb: 1024
>
>
>
> The memtable space settings are commented out in the yaml file, as below:
>
> # memtable_heap_space_in_mb: 2048
>
> # memtable_offheap_space_in_mb: 2048
>
>
>
> Can anyone please suggest a solution for this issue? Thanks in advance!
>
>
>
> Thanks,
>
> Bhargav M
>