Posted to commits@cassandra.apache.org by "Branimir Lambov (JIRA)" <ji...@apache.org> on 2017/01/18 13:58:26 UTC

[jira] [Comment Edited] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

    [ https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15828104#comment-15828104 ] 

Branimir Lambov edited comment on CASSANDRA-10520 at 1/18/17 1:57 PM:
----------------------------------------------------------------------

Attached microbenchmark


was (Author: blambov):
Microbenchmark

> Compressed writer and reader should support non-compressed data.
> ----------------------------------------------------------------
>
>                 Key: CASSANDRA-10520
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Local Write-Read Paths
>            Reporter: Branimir Lambov
>            Assignee: Branimir Lambov
>              Labels: messaging-service-bump-required
>             Fix For: 4.x
>
>         Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as happens, for instance, when writing SSTables during stress tests, produces chunks larger than 64k, which are a problem for the buffer-pooling mechanism employed by {{CompressedRandomAccessReader}}. This results in non-negligible performance issues due to excessive memory allocation.
> To solve this problem, and to avoid decompression overhead in cases where compression provides no benefit, I think we should allow compressed files to store uncompressed chunks as an alternative to compressed data. Such a chunk would be written whenever compression returns a buffer larger than, for example, 90% of the input, and would add no extra delay on the write path. On reads it can be recognized by its size (using a single global threshold constant in the compression metadata), and its data can be transferred directly into the decompressed buffer, skipping the decompression step and ensuring that a 64k buffer for compressed data always suffices.
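For illustration, here is a minimal sketch of the write/read decision described in the last paragraph above. The Compressor interface, the MAX_COMPRESSED_RATIO constant and the method names are hypothetical stand-ins, not Cassandra's actual classes or API: a chunk is stored raw when compression saves too little, and a stored length above the threshold lets the reader skip decompression.

{code:java}
import java.nio.ByteBuffer;

// Sketch of optionally-compressed chunks. All names here are illustrative,
// not Cassandra's real compression API.
public final class OptionalCompressionSketch
{
    // Store the chunk raw if compression saves less than 10% of its size.
    static final double MAX_COMPRESSED_RATIO = 0.9;

    interface Compressor
    {
        void compress(ByteBuffer input, ByteBuffer output);
        void uncompress(ByteBuffer input, ByteBuffer output);
    }

    // Write path: returns the buffer that should actually be written to disk.
    static ByteBuffer maybeCompress(Compressor compressor, ByteBuffer uncompressed, ByteBuffer scratch)
    {
        int inputSize = uncompressed.remaining();
        compressor.compress(uncompressed.duplicate(), scratch);
        scratch.flip();
        // Compression did not shrink the chunk enough: keep it uncompressed.
        if (scratch.remaining() > inputSize * MAX_COMPRESSED_RATIO)
            return uncompressed;
        return scratch;
    }

    // Read path: a stored length above the threshold identifies a raw chunk,
    // so its bytes are copied straight into the output buffer.
    static void readChunk(Compressor compressor, ByteBuffer onDisk, ByteBuffer output, int chunkSize)
    {
        if (onDisk.remaining() > chunkSize * MAX_COMPRESSED_RATIO)
            output.put(onDisk);                  // skip decompression entirely
        else
            compressor.uncompress(onDisk, output);
        output.flip();
    }
}
{code}

Because the threshold is a single global constant, the reader needs no per-chunk flag: the chunk length already stored in the compression metadata is enough to identify uncompressed data.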



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)