Posted to server-dev@james.apache.org by "Benoit Tellier (Jira)" <se...@james.apache.org> on 2020/03/19 02:11:00 UTC

[jira] [Comment Edited] (JAMES-3121) Tune Cassandra Tables

    [ https://issues.apache.org/jira/browse/JAMES-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062232#comment-17062232 ] 

Benoit Tellier edited comment on JAMES-3121 at 3/19/20, 2:10 AM:
-----------------------------------------------------------------

I did a similar analysis on a deployment at Linagora; here are the conclusions I came to:

**Table recommendations** (example ALTER statements follow this list):
 - Adopt the SizeTiered compaction strategy for the following tables:
   - applicableflag
   - attachmentmessageid
   - attachmentv2
   - firstunseen
   - mailboxcounters
   - mailboxrecents
   - messagecounter
   - messagedeleted
   - modseq
   - imapuidtable
   - messageidtable
   - system.paxos

 - Adopt the Leveled strategy for the following table: eventstore

 - Increase the bloom filter FP chance for the following tables:
   - attachmentowners
   - deletedmailsv2
   - firstunseen
   - mailboxcounters
   - messagecounter
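
The per-table figures below look like `nodetool tablestats` output. The rule of thumb behind these recommendations: write-dominated tables compact more cheaply under SizeTiered, the read-heavy eventstore is better served by Leveled, and tables that are rarely read can trade bloom filter accuracy for memory. A minimal sketch of how the changes could be applied, assuming the default `apache_james` keyspace and Cassandra 3.x option names (the FP value is illustrative, not a measured optimum):

```
-- Write-heavy tables: SizeTiered keeps compaction cost proportional to writes.
-- Repeat for each table in the first list above; the keyspace name is assumed.
ALTER TABLE apache_james.applicableflag
  WITH compaction = {'class': 'SizeTieredCompactionStrategy'};

-- Read-heavy table: Leveled bounds the number of SSTables a read touches.
ALTER TABLE apache_james.eventstore
  WITH compaction = {'class': 'LeveledCompactionStrategy'};

-- Rarely-read tables: a higher FP chance shrinks the filter's off-heap footprint.
ALTER TABLE apache_james.attachmentowners
  WITH bloom_filter_fp_chance = 0.1;
```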

-------------------------------

```
Table: paxos
Compaction choice: Leveled
Space used (live): 435780382
Space used (total): 435780382


Table: acl
Compaction choice: SizeTiered
Local read count: 59841252
Local write count: 2
Bloom filter false ratio: 0.00000

Table: applicableflag
Compaction choice: Leveled
Local read count: 527407
Local write count: 3978528
Bloom filter false ratio: 0.00037
Recommendation:
 - Switch compaction to SizeTiered

Table: attachmentmessageid
Compaction choice: Leveled
Local read count: 3591
Local write count: 17375490
Bloom filter false ratio: 0.00000
Recommendation:
 - Switch compaction to SizeTiered

Table: attachmentowners
Compaction choice: Leveled
Local read count: 3723
Local write count: 69
Bloom filter false ratio: 1.00000
Recommendation:
 - Increase bloom filter FP chance

Table: attachmentv2
Compaction choice: Leveled
Local read count: 18032107
Local write count: 17380539
Bloom filter false ratio: 0.00000
Recommendation:
 - Switch compaction to SizeTiered (no updates, unique-id key, not a wide partition)

Table: browsestart
Compaction choice: SizeTiered
Local read count: 136039
Local write count: 0
Bloom filter false ratio: 0.00000

Table: currentquota (counters)
Compaction choice: SizeTiered
Local read count: 31947239
Local write count: 16025449
Bloom filter false ratio: 0.00024

Table: deletedmailsv2
Compaction choice: SizeTiered
Local read count: 16995797
Local write count: 48510
Bloom filter false ratio: 0.50000
Recommendation:
 - Increase bloom filter FP chance

Table: domains
Compaction choice: SizeTiered
Local read count: 977712
Local write count: 6
Bloom filter false ratio: 0.00000

Table: enqueuedmailsv3
Compaction choice: DateTiered
Local read count: 360786016
Local write count: 48571
Bloom filter false ratio: 0.00000

Table: event_dead_letters
Compaction choice: SizeTiered
Local read count: 9237
Local write count: 532544
Bloom filter false ratio: 0.00000

Table: eventstore
Compaction choice: SizeTiered
Local read count: 36353
Local write count: 62
Recommendation:
 - Switch compaction to the Leveled compaction strategy

Table: firstunseen
Compaction choice: Leveled
Local read count: 532429
Local write count: 3404636
Bloom filter false ratio: 0.97986
Recommendation:
 - Switch compaction to the SizeTiered compaction strategy
 - Increase bloom filter FP chance

Table: imapuidtable
Compaction choice: Leveled
Local read count: 10529280
Local write count: 18105757
Bloom filter false ratio: 0.00000
Recommendation:
 - Switch compaction to the SizeTiered compaction strategy

Table: mailbox
Compaction choice: SizeTiered
Local read count: 67527587
Local write count: 55458
Bloom filter false ratio: 0.00135

Table: mailboxcounters
Compaction choice: Leveled
Local read count: 5570883
Local write count: 20250995
Bloom filter false ratio: 0.67268
Recommendation:
 - Switch compaction to the SizeTiered compaction strategy
 - Increase bloom filter FP chance

Table: mailboxpathv2
Compaction choice: SizeTiered
Local read count: 23379709
Local write count: 55742
Bloom filter false ratio: 0.00010

Table: mailboxrecents
Compaction choice: Leveled
Local read count: 1100031
Local write count: 1095795
Bloom filter false ratio: 0.00000
Tombstone issues
Recommendation:
 - Purge tombstones: run a compaction with a low gc_grace_seconds on each node, then a repair, then restore gc_grace_seconds (see the sketch after this report)
 - Switch compaction to the SizeTiered compaction strategy

Table: mappings_sources
Compaction choice: SizeTiered
Local read count: 3247176
Local write count: 20057
Bloom filter false ratio: 0.00325

Table: message_fast_view_projection
Compaction choice: SizeTiered
Local read count: 384188
Local write count: 15659145
Bloom filter false ratio: 0.00000

Table: messagecounter
Compaction choice: Leveled
Local read count: 18953666
Local write count: 15916744
Recommendation:
 - Switch compaction to the SizeTiered compaction strategy
 - Increase bloom filter FP chance

Table: messagedeleted
Compaction choice: Leveled
Local read count: 157181
Local write count: 2144582
Bloom filter false ratio: 0.00000
Recommendation:
 - Switch compaction to the SizeTiered compaction strategy

Table: messageidtable
Compaction choice: Leveled
Local read count: 17861281
Local write count: 18101731
Bloom filter false ratio: 0.00006
Recommendation:
 - Switch compaction to the SizeTiered compaction strategy

Table: messagev2
Compaction choice: SizeTiered
Local read count: 170667816
Local write count: 15904503
Bloom filter false ratio: 0.00002
Local read latency: 0.251 ms -> too slow???

Table: modseq
Compaction choice: Leveled
Local read count: 20853903
Local write count: 17717124
Bloom filter false ratio: 0.00139
Recommendation:
 - Switch compaction to the SizeTiered compaction strategy

Table: rrt
Compaction choice: SizeTiered
Local read count: 975957
Local write count: 20058
Bloom filter false ratio: 0.00002

Table: subscription
Compaction choice: SizeTiered
Local read count: 110384
Local write count: 48779
Bloom filter false ratio: 0.00000
```
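
For the mailboxrecents tombstone purge, a minimal sketch of the sequence, assuming the `apache_james` keyspace and the stock 10-day default for gc_grace_seconds (864000). Caution: running with a lowered gc_grace_seconds can resurrect deleted rows if a node misses the repair, so this is best done during a quiet window:

```
-- Temporarily lower gc_grace_seconds so compaction can drop expired tombstones.
ALTER TABLE apache_james.mailboxrecents WITH gc_grace_seconds = 3600;

-- Then, on each node:
--   nodetool compact apache_james mailboxrecents
--   nodetool repair apache_james mailboxrecents

-- Restore the default grace period once every node has been repaired.
ALTER TABLE apache_james.mailboxrecents WITH gc_grace_seconds = 864000;
```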



> Tune Cassandra Tables
> ---------------------
>
>                 Key: JAMES-3121
>                 URL: https://issues.apache.org/jira/browse/JAMES-3121
>             Project: James Server
>          Issue Type: Improvement
>          Components: cassandra
>            Reporter: Gautier DI FOLCO
>            Priority: Minor
>
> Our usage of Cassandra differs from table to table; different compaction strategies and read_repair_chance values can be applied so that each table matches Cassandra's optimal behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
For additional commands, e-mail: server-dev-help@james.apache.org