Posted to jira@kafka.apache.org by "Rohit Shekhar (Jira)" <ji...@apache.org> on 2020/09/08 21:04:00 UTC
[jira] [Created] (KAFKA-10471) TimeIndex handling may cause data loss in certain back to back failure
Rohit Shekhar created KAFKA-10471:
-------------------------------------
Summary: TimeIndex handling may cause data loss in certain back to back failure
Key: KAFKA-10471
URL: https://issues.apache.org/jira/browse/KAFKA-10471
Project: Kafka
Issue Type: Bug
Components: core, log
Reporter: Rohit Shekhar
# The active segment for log A undergoes a clean shutdown: the TimeIndex is trimmed to its last filled entry and the clean shutdown marker is set.
# The broker restarts and loads logs. No recovery is performed because of the clean shutdown marker; log A comes back with the previous active segment as the current one, and its TimeIndex is resized back to the maximum size.
# Before all logs finish loading, the broker suffers a hard shutdown, leaving the clean shutdown marker in place.
# The broker restarts again; log A skips recovery because the clean shutdown marker is still present, but the TimeIndex now treats the resized file from the previous instance as completely full (the code assumes a file is either newly created or full of valid entries).
# The first append to the active segment triggers a roll, and the TimeIndex is rolled with the timestamp of its "last" entry, which is 0.
# Segment.largestTimestamp therefore returns 0, which can cause premature deletion of data under time-based retention.
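The core of the failure in steps 4-6 can be illustrated with a minimal sketch. This is not Kafka's actual TimeIndex code; it is a hypothetical model assuming 12-byte entries (8-byte timestamp plus 4-byte relative offset) in a zero-filled, pre-allocated buffer, which is how a freshly resized memory-mapped index file behaves:

```java
import java.nio.ByteBuffer;

// Hypothetical model (class and method names are illustrative, not Kafka's)
// of the assumption described above: after the resize in step 2, a loader
// that treats every slot as valid reads the last slot and sees timestamp 0.
public class TimeIndexSketch {
    static final int ENTRY_SIZE = 12; // 8-byte timestamp + 4-byte relative offset

    // Timestamp the loader would report as the "largest" entry after the
    // resize, under the (wrong) assumption that every slot holds valid data.
    static long largestAssumedTimestamp() {
        // Resized index: 10 slots, zero-filled; only slot 0 is genuinely valid.
        ByteBuffer resized = ByteBuffer.allocate(10 * ENTRY_SIZE);
        resized.putLong(0, 1_600_000_000_000L); // the one real entry
        resized.putInt(8, 42);

        // Loader assumes the file is full, so it reads the final slot,
        // which was never written and still contains zeros.
        int lastSlot = resized.capacity() / ENTRY_SIZE - 1;
        return resized.getLong(lastSlot * ENTRY_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(largestAssumedTimestamp()); // prints 0
    }
}
```

A largest timestamp of 0 makes the segment look arbitrarily old, which is why time-based retention can then delete it prematurely.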
--
This message was sent by Atlassian Jira
(v8.3.4#803005)