Posted to jira@kafka.apache.org by "Richard Yu (Jira)" <ji...@apache.org> on 2019/10/24 00:01:11 UTC

[jira] [Comment Edited] (KAFKA-8522) Tombstones can survive forever

    [ https://issues.apache.org/jira/browse/KAFKA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958388#comment-16958388 ] 

Richard Yu edited comment on KAFKA-8522 at 10/23/19 11:59 PM:
--------------------------------------------------------------

Hi [~junrao] [~hachikuji] Just have a question involving implementation of the KIP.

Where is the base timestamp for the RecordBatch (batch header v2) assigned its value? I'm asking because I'm having trouble locating that assignment in the codebase.
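For context, the v2 record batch format stores a single base timestamp once in the batch header and encodes each record's timestamp as a delta against it. A minimal sketch of that encoding scheme (class and method names here are illustrative, not Kafka's internal classes):

```java
import java.util.List;

public class TimestampDelta {
    // The batch header carries the base (first) timestamp once.
    static long baseTimestamp(List<Long> recordTimestamps) {
        return recordTimestamps.get(0);
    }

    // Each record stores only (timestamp - baseTimestamp) as a varint delta.
    static long delta(long baseTimestamp, long recordTimestamp) {
        return recordTimestamp - baseTimestamp;
    }

    // On read, the full timestamp is reconstructed from header + delta.
    static long reconstruct(long baseTimestamp, long delta) {
        return baseTimestamp + delta;
    }

    public static void main(String[] args) {
        List<Long> timestamps = List.of(1_000L, 1_005L, 1_042L);
        long base = baseTimestamp(timestamps);
        for (long ts : timestamps) {
            long d = delta(base, ts);
            System.out.println(d + " -> " + reconstruct(base, d));
        }
    }
}
```

Because only the header's base timestamp is stored in full, finding where it is first assigned pins down where every record timestamp in the batch is anchored.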


was (Author: yohan123):
Hi [~junrao] [~hachikuji] Just have a question involving implementation of the KIP.

Where is the base timestamp for the RecordBatch (batch header v2) defined in Kafka? I'm asking because I'm having a bit of trouble locating where the base timestamp's value is assigned. 

> Tombstones can survive forever
> ------------------------------
>
>                 Key: KAFKA-8522
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8522
>             Project: Kafka
>          Issue Type: Improvement
>          Components: log cleaner
>            Reporter: Evelyn Bayes
>            Priority: Minor
>
> This is a bit of a grey area as to whether it's a "bug", but it is certainly unintended behaviour.
>  
> Under specific conditions tombstones effectively survive forever:
>  * Small amount of throughput;
>  * min.cleanable.dirty.ratio near or at 0; and
>  * Other parameters at default.
> What happens is that all the data continuously gets cycled into the oldest segment. Old records get compacted away, but the new records continuously update the timestamp of the oldest segment, resetting the countdown for deleting tombstones.
> So tombstones build up in the oldest segment forever.
>  
> While you could "fix" this by reducing the segment size, this can be undesirable as a sudden change in throughput could cause a dangerous number of segments to be created.
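For illustration, a hypothetical topic configuration matching the conditions described above (example values only, not recommendations):

```
# min.cleanable.dirty.ratio near 0: the cleaner runs on almost every append,
# repeatedly compacting new records into the oldest segment.
min.cleanable.dirty.ratio=0.0

# Tombstones should be eligible for deletion 24 h after the segment containing
# them is cleaned...
delete.retention.ms=86400000

# ...but with low throughput and default segment settings, each cleaning pass
# refreshes the oldest segment's timestamp, so the delete.retention.ms clock
# keeps resetting and tombstones are never removed.
segment.bytes=1073741824
```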



--
This message was sent by Atlassian Jira
(v8.3.4#803005)