Posted to jira@kafka.apache.org by "Mohamed Aashif (Jira)" <ji...@apache.org> on 2021/07/14 09:38:00 UTC
[jira] [Updated] (KAFKA-13085) Offsets clean up based on largest Timestamp in a Log segment
[ https://issues.apache.org/jira/browse/KAFKA-13085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mohamed Aashif updated KAFKA-13085:
-----------------------------------
Component/s: (was: config)
> Offsets clean up based on largest Timestamp in a Log segment
> ------------------------------------------------------------
>
> Key: KAFKA-13085
> URL: https://issues.apache.org/jira/browse/KAFKA-13085
> Project: Kafka
> Issue Type: Bug
> Reporter: Mohamed Aashif
> Priority: Major
>
> This is to confirm the behaviour of [retention.ms|https://kafka.apache.org/documentation/#topicconfigs_retention.ms].
> According to the Kafka documentation, a log segment is not deleted until the *_largest timestamp_* of the records in that segment is older than the retention period.
> Initially, we assumed that this _*largest timestamp*_ was derived from the time the record was created on the broker. However, it is taken from the timestamp of the _ProducerRecord_, which can be set manually by the producer.
> We would like to confirm whether this is intentional, since a log segment can remain undeleted indefinitely if a record carries a far-future timestamp.
> Please clarify.
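The concern above can be illustrated with a minimal sketch (not Kafka's actual code, and the function name is hypothetical): a time-based retention check that compares a segment's largest record timestamp against now minus retention.ms never fires if a producer-supplied timestamp lies far in the future.

```python
# Illustrative sketch of a CreateTime-based retention check, assuming the
# simplified rule: a segment is deletable once its largest record timestamp
# is older than now - retention.ms. Not Kafka's actual implementation.
import time

RETENTION_MS = 7 * 24 * 60 * 60 * 1000  # e.g. retention.ms = 7 days


def segment_eligible_for_deletion(largest_timestamp_ms: int, now_ms: int) -> bool:
    """Return True if the segment's largest timestamp has aged past retention."""
    return now_ms - largest_timestamp_ms > RETENTION_MS


now = int(time.time() * 1000)
day_ms = 24 * 60 * 60 * 1000

# Segment whose newest record is 8 days old: past retention, deletable.
print(segment_eligible_for_deletion(now - 8 * day_ms, now))    # True

# Segment containing a record with a producer-set timestamp one year in the
# future: now - largest_timestamp is negative, so it never exceeds retention
# and the segment is never deleted.
print(segment_eligible_for_deletion(now + 365 * day_ms, now))  # False
```

Under this rule, a single far-future timestamp pins the whole segment, which is the behaviour the report asks to have confirmed.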
--
This message was sent by Atlassian Jira
(v8.3.4#803005)