Posted to commits@cassandra.apache.org by "Pavel Yaskevich (Issue Comment Edited) (JIRA)" <ji...@apache.org> on 2012/04/12 18:09:21 UTC

[jira] [Issue Comment Edited] (CASSANDRA-4142) OOM Exception during repair session with LeveledCompactionStrategy

    [ https://issues.apache.org/jira/browse/CASSANDRA-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252523#comment-13252523 ] 

Pavel Yaskevich edited comment on CASSANDRA-4142 at 4/12/12 4:08 PM:
---------------------------------------------------------------------

bq. The comments in CRAR say that it can't use super.read, so is the RAR buffer wasted?

The buffer in CRAR is used to read compressed data from disk (instead of allocating a separate buffer for each read), and it uses RAR.buffer for the decompressed output, so neither buffer is wasted.
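A minimal sketch of that two-buffer pattern, with illustrative names rather than Cassandra's actual CRAR API (java.util.zip stands in for Snappy, which is not in the JDK):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Hypothetical sketch of the pattern described above: one reusable
// buffer holds the raw compressed chunk read from disk, and a second
// reusable buffer (the analogue of RAR.buffer) receives the
// decompressed bytes. Neither is allocated per read.
class CompressedChunkReader {
    private final RandomAccessFile file;
    private final byte[] compressed; // reused for every on-disk chunk
    final byte[] buffer;             // reused for decompressed output

    CompressedChunkReader(RandomAccessFile file, int chunkLength) {
        this.file = file;
        this.compressed = new byte[chunkLength];
        this.buffer = new byte[chunkLength];
    }

    /** Reads `length` compressed bytes at `offset` and inflates them
     *  into the shared output buffer; returns the decompressed size. */
    int readChunk(long offset, int length) throws IOException {
        file.seek(offset);
        file.readFully(compressed, 0, length);
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, length);
        try {
            return inflater.inflate(buffer);
        } catch (DataFormatException e) {
            throw new IOException("corrupt chunk at offset " + offset, e);
        } finally {
            inflater.end();
        }
    }
}
```

The buffers are reused across reads within one reader; the issue below arises because every SSTableBoundedScanner holds its own reader, so these per-reader buffers multiply with the SSTable count.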
                
> OOM Exception during repair session with LeveledCompactionStrategy
> ------------------------------------------------------------------
>
>                 Key: CASSANDRA-4142
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4142
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>    Affects Versions: 1.0.6
>         Environment: OS: Linux CentOs 6 
> JDK: Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
> Node configuration:
> Quad-core
> 10 GB RAM
> Xmx set to 2.5 GB (as computed by default).
>            Reporter: Romain Hardouin
>
> We encountered an OOM Exception on 2 nodes during repair session.
> Our CF are set up to use LeveledCompactionStrategy and SnappyCompressor.
> These two options used together may be the key to the problem.
> Despite setting -XX:+HeapDumpOnOutOfMemoryError, no dump has been generated.
> Nonetheless a memory analysis on a live node performing a repair reveals a hotspot: an ArrayList of SSTableBoundedScanner which appears to contain as many objects as there are SSTables on disk.
> This ArrayList consumes 786 MB of the heap space for 5757 objects. Therefore each object is about 140 KB.
> Eclipse Memory Analyzer's dominator tree shows that 99% of a SSTableBoundedScanner object's memory is consumed by a CompressedRandomAccessReader which contains two big byte arrays.
> Cluster information:
> 9 nodes
> Each node handles 35 GB (RandomPartitioner)
> This JIRA was created following this discussion:
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Why-so-many-SSTables-td7453033.html
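The figures in the description are internally consistent, as a quick back-of-the-envelope check shows (the ~64 KB chunk length mentioned in the comments is an assumption, not stated in the ticket):

```java
// Back-of-the-envelope check of the heap figures reported above:
// 786 MB spread across 5757 SSTableBoundedScanner objects.
public class HeapMath {
    public static void main(String[] args) {
        long scanners = 5757;
        long heapUsedKb = 786L * 1024; // 786 MB expressed in KB
        long perScannerKb = heapUsedKb / scanners;
        // Two byte arrays of roughly 64 KB each per
        // CompressedRandomAccessReader, plus object overhead, would be
        // consistent with this per-scanner figure (assumption).
        System.out.println(perScannerKb + " KB per scanner"); // prints: 139 KB per scanner
    }
}
```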

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira