Posted to jira@kafka.apache.org by "Mohamed Aashif (Jira)" <ji...@apache.org> on 2021/07/14 09:41:00 UTC

[jira] [Updated] (KAFKA-13084) Offsets clean up based on largest Timestamp in a Log segment

     [ https://issues.apache.org/jira/browse/KAFKA-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mohamed Aashif updated KAFKA-13084:
-----------------------------------
    Description: ~Duplicated~  (was: This is to confirm the behaviour of [retention.ms|https://kafka.apache.org/documentation/#topicconfigs_retention.ms].

From our observation, a log segment is not deleted until retention.ms has elapsed since the _*largest timestamp*_ of the records in that segment.

Initially, we thought that the _*largest timestamp*_ was derived from the time the record was created. However, it is taken from the timestamp of the _ProducerRecord_, which can be set manually.

We are not sure whether this is intentional, as a log segment can remain undeleted indefinitely if a timestamp far in the future is supplied.
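For illustration, a minimal producer sketch of what we mean (the broker address, topic name, and offset of ten years are hypothetical values, not taken from our setup):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FutureTimestampProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Timestamp roughly ten years in the future. With the default
            // message.timestamp.type=CreateTime, this value becomes the record's
            // timestamp and feeds into the segment's largest timestamp.
            long farFuture = System.currentTimeMillis() + 10L * 365 * 24 * 60 * 60 * 1000;

            ProducerRecord<String, String> record =
                new ProducerRecord<>("test-topic", null, farFuture, "key", "value");
            producer.send(record);
        }
    }
}
{code}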


Please clarify.)

> Offsets clean up based on largest Timestamp in a Log segment
> ------------------------------------------------------------
>
>                 Key: KAFKA-13084
>                 URL: https://issues.apache.org/jira/browse/KAFKA-13084
>             Project: Kafka
>          Issue Type: Bug
>          Components: config
>            Reporter: Mohamed Aashif
>            Priority: Major
>
> ~Duplicated~



--
This message was sent by Atlassian Jira
(v8.3.4#803005)