Posted to dev@hbase.apache.org by "Andrew Kyle Purtell (Jira)" <ji...@apache.org> on 2022/06/11 18:59:00 UTC

[jira] [Resolved] (HBASE-12324) Improve compaction speed and process for immutable short lived datasets

     [ https://issues.apache.org/jira/browse/HBASE-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Kyle Purtell resolved HBASE-12324.
-----------------------------------------
    Resolution: Abandoned

> Improve compaction speed and process for immutable short lived datasets
> -----------------------------------------------------------------------
>
>                 Key: HBASE-12324
>                 URL: https://issues.apache.org/jira/browse/HBASE-12324
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>    Affects Versions: 0.98.0, 0.96.0
>            Reporter: Sheetal Dolas
>            Priority: Major
>         Attachments: OnlyDeleteExpiredFilesCompactionPolicy.java
>
>
> We have seen multiple cases where HBase is used to store immutable data and the data lives for only a short period of time (a few days).
> On very high volume systems, major compactions become very costly and slow down ingestion rates.
> In all such use cases (immutable data, a high write rate, moderate read rates, and a short TTL), avoiding compactions entirely and simply deleting old data brings significant performance benefits.
> We should have a compaction policy that only deletes/archives files older than the TTL and does not compact any files.
> Also attaching a patch that can do so.
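The core idea in the description (and presumably in the attached OnlyDeleteExpiredFilesCompactionPolicy.java, which is not reproduced here) can be sketched as a selection rule: a store file is eligible for archival only when its newest cell timestamp has passed the TTL, and no file is ever selected for rewriting, so compaction I/O is avoided entirely. The following is a hedged, dependency-free sketch of that rule, not the attached patch; the StoreFile record and method names are hypothetical stand-ins for HBase's real store-file metadata.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of TTL-only file selection, as described in HBASE-12324.
// Not the attached patch; HBase's actual CompactionPolicy API is not used here.
public final class TtlOnlySelection {

    /** Hypothetical stand-in for an HFile's metadata: name and newest cell timestamp (ms). */
    public record StoreFile(String name, long maxTimestampMs) {}

    /**
     * Returns only files whose entire contents have expired (newest timestamp
     * older than now - ttl). Nothing is ever selected for rewriting, so the
     * policy performs no compaction I/O: expired files are simply archived.
     */
    public static List<StoreFile> selectExpired(List<StoreFile> files,
                                                long ttlMs, long nowMs) {
        List<StoreFile> expired = new ArrayList<>();
        for (StoreFile f : files) {
            if (f.maxTimestampMs() < nowMs - ttlMs) {
                expired.add(f); // every cell in this file is past TTL; safe to delete wholesale
            }
        }
        return expired;
    }
}
```

Because data here is immutable and short-lived, there are no deletes or updates to reconcile across files, which is what makes skipping rewrites safe in this use case.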



--
This message was sent by Atlassian Jira
(v8.20.7#820007)