Posted to user@cassandra.apache.org by sulong <su...@gmail.com> on 2013/07/01 08:00:17 UTC

Re: CompactionExecutor holds 8000+ SSTableReader 6G+ memory

These two fields:
CompressedRandomAccessReader.buffer
CompressedRandomAccessReader.compressed

in the queue SSTableReader.dfile.pool consumed that memory. I think
SSTableReader.dfile is the cache of the SSTable data file.


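For illustration only, here is a minimal sketch of the pattern described
above: a per-SSTable pool that recycles readers, each of which owns its own
compressed and decompressed buffers. The class and field names echo the
Cassandra fields mentioned in this thread, but the code itself, the 64 KB
buffer size, and the two-pooled-readers-per-SSTable figure are assumptions
made for the sketch, not the actual 1.2 implementation.

    // Minimal, hypothetical sketch (not the real Cassandra classes). It only
    // shows how a per-file pool of readers, each holding its own buffers,
    // can add up across thousands of open SSTables.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class PooledReaderSketch {

        // Stand-in for CompressedRandomAccessReader: two buffers per instance,
        // one for the compressed chunk read from disk and one for the
        // uncompressed result (the 64 KB chunk size is an assumption).
        static final class CompressedReader {
            final byte[] compressed = new byte[64 * 1024];
            final byte[] buffer = new byte[64 * 1024];
        }

        // Stand-in for the SSTableReader.dfile.pool queue: recycled readers
        // are kept around instead of released, so their buffers stay reachable.
        static final class ReaderPool {
            private final Deque<CompressedReader> pool = new ArrayDeque<>();

            CompressedReader borrow() {
                CompressedReader r = pool.poll();
                return r != null ? r : new CompressedReader();
            }

            void recycle(CompressedReader r) {
                pool.push(r); // nothing bounds the pool size in this sketch
            }

            long retainedBytes() {
                return pool.size() * 2L * 64 * 1024; // two 64 KB buffers per reader
            }
        }

        public static void main(String[] args) {
            // 8000 SSTables (the count from the subject line), each with a pool
            // that ends up holding two recycled readers.
            long total = 0;
            for (int i = 0; i < 8000; i++) {
                ReaderPool pool = new ReaderPool();
                CompressedReader r1 = pool.borrow(); // pool empty, so a new reader is created
                CompressedReader r2 = pool.borrow();
                pool.recycle(r1);
                pool.recycle(r2);
                total += pool.retainedBytes();
            }
            System.out.printf("retained by pooled buffers: ~%d MB%n",
                              total / (1024 * 1024));
        }
    }

Even though every individual buffer is small, the sketch ends up retaining
roughly 2 GB, which shows how the figure in the subject line can accumulate
once thousands of readers are pooled at the same time.
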
On Sat, Jun 29, 2013 at 1:09 PM, aaron morton <aa...@thelastpickle.com> wrote:

> Lots of memory is consumed by the SSTableReader's cache
>
> The file cache is managed by the OS.
> However, the SSTableReader will have bloom filters and compression
> metadata, both off heap in 1.2. The Key and Row caches are global, so they
> are not associated with any one SSTable.
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 28/06/2013, at 6:23 PM, sulong <su...@gmail.com> wrote:
>
> Total 100G data per node.
>
>
> On Fri, Jun 28, 2013 at 2:14 PM, sulong <su...@gmail.com> wrote:
>
>> aaron, thanks for your reply. Yes, I do use the Leveled compaction
>> strategy, and the SSTable size is 10MB. If it happens again, I will try to
>> enlarge the sstable size.
>>
>> I just wonder why Cassandra doesn't limit the SSTableReader's total
>> memory usage when compacting. Lots of memory is consumed by the
>> SSTableReader's cache. Why not clear these caches first at the beginning
>> of compaction?
>>
>>
>> On Fri, Jun 28, 2013 at 1:14 PM, aaron morton <aa...@thelastpickle.com> wrote:
>>
>>> Are you running the Leveled compaction strategy?
>>> If so, what is the max SSTable size, and what is the total data per node?
>>>
>>> If you are running it, try using a larger SSTable size like 32MB.
>>>
>>> Cheers
>>>
>>> -----------------
>>> Aaron Morton
>>> Freelance Cassandra Consultant
>>> New Zealand
>>>
>>> @aaronmorton
>>> http://www.thelastpickle.com
>>>
>>> On 27/06/2013, at 2:02 PM, sulong <su...@gmail.com> wrote:
>>>
>>> According to the OpsCenter records, yes, the compaction was running
>>> then, at 8.5 MB/s.
>>>
>>>
>>> On Thu, Jun 27, 2013 at 9:54 AM, sulong <su...@gmail.com> wrote:
>>>
>>>> version: 1.2.2
>>>> cluster read requests 800/s, write requests 22/s
>>>> Sorry, I don't know whether the compaction was running then.
>>>>
>>>>
>>>> On Thu, Jun 27, 2013 at 1:02 AM, Robert Coli <rc...@eventbrite.com> wrote:
>>>>
>>>>> On Tue, Jun 25, 2013 at 10:13 PM, sulong <su...@gmail.com> wrote:
>>>>> > I have a 4-node Cassandra cluster. Every node has 32G memory, and the
>>>>> > Cassandra JVM uses 8G. The cluster is suffering from GC. Looks like the
>>>>> > CompactionExecutor thread holds too many SSTableReaders. See the
>>>>> > attachment.
>>>>>
>>>>> What version of Cassandra?
>>>>> What workload?
>>>>> Is compaction actually running?
>>>>>
>>>>> =Rob
>>>>>
>>>>
>>>>
>>>
>>>
>>
>
>
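
As a rough cross-check of the advice above to move from 10MB to 32MB SSTables
with LCS, the arithmetic below uses only the figures quoted in this thread
(about 100G of data per node, 10MB SSTables) plus a guessed 128 KB of pooled
reader buffers per open SSTable; the real retained amount depends on how many
readers each pool is actually holding.

    // Back-of-the-envelope estimate of SSTable counts and pooled buffer memory.
    // The 128 KB per-SSTable figure is an assumption, not a measured value.
    public class SSTableCountEstimate {
        public static void main(String[] args) {
            long dataPerNodeMb = 100L * 1024;   // ~100G per node, as reported in the thread
            long perSSTableBufferKb = 128;      // assumed: two 64 KB buffers per pooled reader

            for (long sstableSizeMb : new long[] {10, 32}) {
                long sstableCount = dataPerNodeMb / sstableSizeMb;
                long retainedMb = sstableCount * perSSTableBufferKb / 1024;
                System.out.printf("sstable_size_in_mb=%d -> ~%d SSTables, ~%d MB of pooled buffers%n",
                                  sstableSizeMb, sstableCount, retainedMb);
            }
        }
    }

With 10MB SSTables that works out to roughly 10,000 SSTables per node, which
is consistent with the 8000+ SSTableReaders in the subject line; at 32MB the
count, and whatever per-reader state comes with it, drops to about a third.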