Posted to dev@kafka.apache.org by "Jason Gustafson (JIRA)" <ji...@apache.org> on 2017/05/24 19:15:04 UTC

[jira] [Created] (KAFKA-5321) MemoryRecords.filterTo can return corrupt data if output buffer is not large enough

Jason Gustafson created KAFKA-5321:
--------------------------------------

             Summary: MemoryRecords.filterTo can return corrupt data if output buffer is not large enough
                 Key: KAFKA-5321
                 URL: https://issues.apache.org/jira/browse/KAFKA-5321
             Project: Kafka
          Issue Type: Bug
          Components: log
            Reporter: Jason Gustafson
            Assignee: Jason Gustafson
            Priority: Blocker
             Fix For: 0.11.0.0


Due to KAFKA-5316, it is possible for a record set to grow during cleaning and overflow the output buffer allocated for writing. When we reach a record set that is doomed to overflow the buffer, there are two possibilities:

1. No records were removed and the original entry is directly appended to the log. This results in the overflow reported in KAFKA-5316.
2. Some records were removed, so a new record set is built.

Here we are concerned with the latter case. The problem is that the builder automatically allocates a new buffer when it reaches the end of the existing one, but it does not reset the position in the original buffer. Since {{MemoryRecords.filterTo}} continues to use the old buffer, this can lead to data corruption after cleaning (the data left in the overflowed buffer is garbage).
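
To make the failure mode concrete, here is a minimal, self-contained sketch. The {{GrowingBuilder}} and {{StaleBufferDemo}} classes are hypothetical stand-ins, not the actual Kafka builder code: the builder silently switches to a larger internal buffer on overflow, while the caller keeps reading from the buffer it originally supplied, which now holds only stale partial data.

{code:java}
import java.nio.ByteBuffer;

// Hypothetical, simplified illustration of the failure mode described above
// (not the real Kafka classes).
public class StaleBufferDemo {

    // Stand-in for the record builder: appends bytes, growing a new buffer if needed.
    static class GrowingBuilder {
        private ByteBuffer buffer;

        GrowingBuilder(ByteBuffer initial) {
            this.buffer = initial;
        }

        void append(byte[] payload) {
            if (buffer.remaining() < payload.length) {
                // Grow: copy what has been written so far into a larger buffer.
                ByteBuffer bigger = ByteBuffer.allocate((buffer.capacity() + payload.length) * 2);
                bigger.put(buffer.array(), 0, buffer.position());
                // The caller's reference still points at the old, partially written buffer.
                buffer = bigger;
            }
            buffer.put(payload);
        }

        int bytesWritten() {
            return buffer.position();
        }
    }

    public static void main(String[] args) {
        ByteBuffer callerBuffer = ByteBuffer.allocate(8);
        GrowingBuilder builder = new GrowingBuilder(callerBuffer);

        builder.append(new byte[]{1, 2, 3, 4, 5, 6});
        builder.append(new byte[]{7, 8, 9, 10}); // overflows: the builder silently switches buffers

        // A caller that keeps using its own buffer (as filterTo does) sees only the
        // stale partial write, not the 10 bytes the builder actually holds.
        callerBuffer.flip();
        System.out.println("builder wrote " + builder.bytesWritten()
                + " bytes, caller sees " + callerBuffer.remaining());
    }
}
{code}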

Note that this issue could be fixed as part of a general solution to KAFKA-5316, but if that seems too risky, we might fix this one separately. A simple solution is to make both paths consistent and ensure that we raise an exception when the output buffer would overflow.
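
For illustration only, a minimal sketch of that fail-fast option, again using a hypothetical class rather than the real builder: the append path checks capacity up front and throws instead of silently growing, so the caller can never be left holding a stale buffer.

{code:java}
import java.nio.ByteBuffer;

// Hypothetical "fail fast" variant of the builder above: rather than growing a new
// buffer behind the caller's back, it rejects an append that would not fit,
// mirroring the "raise an exception" option suggested above.
public class BoundedBuilder {
    private final ByteBuffer buffer;

    public BoundedBuilder(ByteBuffer buffer) {
        this.buffer = buffer;
    }

    public void append(byte[] payload) {
        if (buffer.remaining() < payload.length)
            throw new IllegalStateException("Filtered records do not fit in the supplied output buffer: need "
                    + payload.length + " bytes, have " + buffer.remaining());
        buffer.put(payload);
    }

    public static void main(String[] args) {
        BoundedBuilder builder = new BoundedBuilder(ByteBuffer.allocate(8));
        builder.append(new byte[]{1, 2, 3, 4, 5, 6});
        builder.append(new byte[]{7, 8, 9, 10}); // throws instead of corrupting the output
    }
}
{code}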



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)