Posted to server-dev@james.apache.org by "Benoit Tellier (Jira)" <se...@james.apache.org> on 2019/10/17 03:52:00 UTC
[jira] [Created] (JAMES-2925) Decreasing chunk_length_in_kb for read heavy workloads
Benoit Tellier created JAMES-2925:
-------------------------------------
Summary: Decreasing chunk_length_in_kb for read heavy workloads
Key: JAMES-2925
URL: https://issues.apache.org/jira/browse/JAMES-2925
Project: James Server
Issue Type: Improvement
Components: cassandra, mailbox
Reporter: Benoit Tellier
James primarily serves reads, with read ratios often over 80%.
We therefore often benefit from read optimizations.
One such optimization is the size of the chunks that are LZ4-compressed within SSTable files:
- bigger chunks mean better compression ratios
- but every read must load and decompress a full chunk, which costs extra IO
For read-heavy workloads, experiments show that decreasing the chunk size from the 64KB default to a more reasonable value (like 4KB) often leads to significant performance improvements.
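This tuning is a per-table CQL change. A minimal sketch, assuming a hypothetical mailbox.messageIdTable table (the actual keyspace and table names depend on the deployed James Cassandra schema):

```sql
-- Switch the table to 4KB LZ4 compression chunks (default is 64KB)
ALTER TABLE mailbox.messageIdTable
  WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};

-- Note: existing SSTables keep the old chunk size until rewritten, e.g. via
--   nodetool upgradesstables -a mailbox messageIdTable
```

Note that the change only applies to newly written SSTables; existing data must be rewritten (for instance with nodetool upgradesstables) before the benefit shows up in benchmarks.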
As a first adoption step, we should run a performance test on read-heavy mailbox metadata. If conclusive, we can consider adopting the change for other frequently read metadata.
For reference, here is an article on The Last Pickle blog: https://thelastpickle.com/blog/2018/08/08/compression_performance.html
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
For additional commands, e-mail: server-dev-help@james.apache.org