Posted to jira@kafka.apache.org by "Jiangjie Qin (JIRA)" <ji...@apache.org> on 2017/11/23 00:31:00 UTC
[jira] [Created] (KAFKA-6264) Log cleaner thread may die on legacy segment containing messages whose offsets are too large
Jiangjie Qin created KAFKA-6264:
-----------------------------------
Summary: Log cleaner thread may die on legacy segment containing messages whose offsets are too large
Key: KAFKA-6264
URL: https://issues.apache.org/jira/browse/KAFKA-6264
Project: Kafka
Issue Type: Bug
Components: core
Affects Versions: 0.11.0.2, 1.0.0, 0.10.2.1
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
Priority: Critical
Fix For: 1.0.1
We encountered a problem where some legacy log segments contain messages whose offsets are larger than {{SegmentBaseOffset + Int.MaxValue}}.
Prior to 0.10.2.0, we did not assert the offsets of messages when appending them to log segments. Due to KAFKA-5413, the log cleaner may have appended messages whose offsets are greater than {{base_offset + Int.MaxValue}} into a segment during log compaction.
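For context: a segment's offset index stores each message's offset relative to the segment's base offset as a 4-byte integer, which is why offsets beyond {{base_offset + Int.MaxValue}} are unrepresentable. Below is a minimal sketch of that range check; the method names are illustrative assumptions, not the exact LogSegment API.

{code:scala}
// Illustrative sketch only; the actual check lives in Kafka's LogSegment/OffsetIndex.
// A relative offset must fit in a 4-byte Int, so an absolute offset is
// representable only when 0 <= offset - baseOffset <= Int.MaxValue.
def canConvertToRelativeOffset(baseOffset: Long, offset: Long): Boolean =
  offset - baseOffset >= 0 && offset - baseOffset <= Int.MaxValue

// Hypothetical form of the assertion added in 0.10.2.0+: appending a message
// outside the range throws, which is what kills the log cleaner thread.
def ensureOffsetInRange(baseOffset: Long, offset: Long): Unit =
  if (!canConvertToRelativeOffset(baseOffset, offset))
    throw new IllegalArgumentException(
      s"Offset $offset is out of range for segment with base offset $baseOffset")
{code}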
After the brokers are upgraded, those log segments can no longer be compacted: compaction fails immediately on the offset range assertion we added to LogSegment, and the log cleaner thread dies.
We have seen this issue in the {{__consumer_offsets}} topic, so it could be a general problem. There is no easy way for users to recover from this case.
One solution is to have the log cleaner split such a log segment once it sees a message with a problematic offset, appending those messages to a separate log segment with a larger base offset, as sketched below.
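A rough sketch of that splitting idea follows; {{Message}} and {{splitByOffsetRange}} are hypothetical names used only for illustration, not the actual patch.

{code:scala}
// Illustrative sketch of splitting a segment's messages so every relative
// offset fits in an Int. Whenever a message's offset would overflow the
// current group, start a new group whose base offset is that message's offset.
case class Message(offset: Long /*, key, value, ... */)

def splitByOffsetRange(messages: Seq[Message]): Seq[(Long, Seq[Message])] = {
  require(messages.nonEmpty, "need at least one message")
  val segments = scala.collection.mutable.ListBuffer.empty[(Long, Seq[Message])]
  var baseOffset = messages.head.offset
  var current = scala.collection.mutable.ListBuffer.empty[Message]
  for (m <- messages) {
    if (m.offset - baseOffset > Int.MaxValue) {
      // This offset no longer fits: close the current segment and roll a new
      // one with a larger base offset.
      segments += ((baseOffset, current.toList))
      baseOffset = m.offset
      current = scala.collection.mutable.ListBuffer.empty[Message]
    }
    current += m
  }
  segments += ((baseOffset, current.toList))
  segments.toList
}
{code}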
Given the impact of the issue, we may want to consider backporting the fix to the affected older versions.