Posted to commits@cassandra.apache.org by "Pavel Yaskevich (JIRA)" <ji...@apache.org> on 2016/03/24 08:59:25 UTC

[jira] [Comment Edited] (CASSANDRA-11383) Avoid index segment stitching in RAM which leads to OOM on big SSTable files

    [ https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15209928#comment-15209928 ] 

Pavel Yaskevich edited comment on CASSANDRA-11383 at 3/24/16 7:58 AM:
----------------------------------------------------------------------

[~doanduyhai] As an update, all of the changes are [complete against the 3.5 branch|https://github.com/xedin/cassandra/tree/CASSANDRA-11383] and testall/dtest are currently running. The patchset changes SPARSE mode to fail if it detects that its property is not satisfied (a single term attached to more than N keys). TokenTreeBuilder has been split into Dynamic and Static variants; the latter is used in the index segment stitching phase, which reduces the memory overhead to O(1) because the TokenTree is built in a streaming fashion instead of being pre-cached as before.

We have tested everything against the 26GB file I got from you, but you are also welcome to try the code out before I merge everything.
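For context, SPARSE mode is intended for columns where each indexed term is attached to only a small number of keys (e.g. dense numeric data such as timestamps), which is the property the patch now enforces by failing instead of exhausting memory. A minimal sketch of such an index is below; the keyspace, table and column names are hypothetical, only the {{'mode': 'SPARSE'}} option is taken from this ticket.

{code:sql}
-- Hypothetical keyspace/table/column names; only 'mode': 'SPARSE' reflects this ticket.
-- SPARSE assumes every indexed value maps to only a few partition keys;
-- with this patch an index violating that assumption should fail rather than OOM.
CREATE CUSTOM INDEX events_created_at_idx ON demo.events (created_at)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = { 'mode': 'SPARSE' };
{code}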

||branch||testall||dtest||
|[CASSANDRA-11383|https://github.com/xedin/cassandra/tree/CASSANDRA-11383]|[testall|http://cassci.datastax.com/job/xedin-CASSANDRA-11383-testall/]|[dtest|http://cassci.datastax.com/job/xedin-CASSANDRA-11383-dtest/]|



was (Author: xedin):
[~doanduyhai] As an update, all of the changes are [complete against the 3.5 branch|https://github.com/xedin/cassandra/tree/CASSANDRA-11383] and testall/dtest are currently running. The patchset changes SPARSE mode to fail if it detects that its property is not satisfied (a single term attached to more than N keys). TokenTreeBuilder has been split into Dynamic and Static variants; the latter is used in the index segment stitching phase, which reduces the memory overhead to O(1) because the TokenTree is built in a streaming fashion instead of being pre-cached as before.

We have tested everything against the 26GB file I got from you, but you are also welcome to try the code out before I merge everything.

> Avoid index segment stitching in RAM which leads to OOM on big SSTable files
> ----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11383
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CQL
>         Environment: C* 3.4
>            Reporter: DOAN DuyHai
>            Assignee: Jordan West
>              Labels: sasi
>             Fix For: 3.5
>
>         Attachments: CASSANDRA-11383.patch, SASI_Index_build_LCS_1G_Max_SSTable_Size_logs.tar.gz, new_system_log_CMS_8GB_OOM.log, system.log_sasi_build_oom
>
>
> 13 bare-metal machines:
> - 6-core CPU (12 HT)
> - 64GB RAM
> - 4 SSDs in RAID0
> JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
> - ≈ 100GB per node
> - 1.3TB cluster-wide
> - ≈ 20GB for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices:
> - 8 indices on text fields: NonTokenizingAnalyzer, PREFIX mode, case-insensitive
> - 1 index on a numeric field: SPARSE mode
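> (As a hedged illustration only: one of the 8 text indices could be defined roughly as below. Keyspace, table and column names are hypothetical; the SASI options mirror the description above. The ninth index follows the same pattern with OPTIONS = { 'mode': 'SPARSE' } on the numeric column.)
> {code:sql}
> -- Hypothetical names; options follow the setup described in this ticket.
> CREATE CUSTOM INDEX users_name_idx ON demo.users (name)
> USING 'org.apache.cassandra.index.sasi.SASIIndex'
> WITH OPTIONS = {
>   'mode': 'PREFIX',
>   'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
>   'case_sensitive': 'false'
> };
> {code}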
> After a while, the nodes just went OOM.
> I have attached log files. You can see a lot of GC happening while index segments are flushed to disk. At some point the nodes OOM ...
> /cc [~xedin]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)