Posted to dev@kafka.apache.org by "Jay Kreps (JIRA)" <ji...@apache.org> on 2012/10/04 21:41:47 UTC

[jira] [Created] (KAFKA-546) Fix commit() in zk consumer

Jay Kreps created KAFKA-546:
-------------------------------

             Summary: Fix commit() in zk consumer
                 Key: KAFKA-546
                 URL: https://issues.apache.org/jira/browse/KAFKA-546
             Project: Kafka
          Issue Type: New Feature
    Affects Versions: 0.8
            Reporter: Jay Kreps


In 0.7.x and earlier, offsets were assigned by the byte location in the log file. Because it wasn't possible to decompress directly from the middle of a compressed block, messages inside a compressed message set effectively had no offsets of their own. As a result, the offset given to the consumer was always the offset of the wrapper message set.

In 0.8, after the logical offsets patch, messages in a compressed set do have offsets. However, the server still needs to fetch from the beginning of the compressed message set (otherwise it can't be decompressed). As a result, a commit() that occurs in the middle of a message set will still result in some duplicates.

This can be fixed in the ConsumerIterator by discarding messages with offsets smaller than the fetch offset rather than giving them to the consumer. This will make commit() work correctly in the presence of compressed messages (finally).
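
To make the failure mode concrete, here is a small standalone Scala model (illustrative names only, not Kafka's actual classes): a compressed set spanning offsets 10-14 is committed at offset 12, the broker re-serves the whole set, and the consumer should discard the already-consumed prefix.

    // Model of a compressed message set: the broker can only serve the
    // whole set, so a fetch at offset 12 still returns offsets 10..14.
    case class MessageAndOffset(offset: Long)

    val compressedSet = (10L to 14L).map(MessageAndOffset)
    val committedOffset = 12L // commit() happened mid-set

    // Without the fix, offsets 10 and 11 are re-delivered as duplicates.
    // The proposed fix drops messages below the committed fetch offset
    // before handing them to the application.
    val delivered = compressedSet.dropWhile(_.offset < committedOffset)
    assert(delivered.map(_.offset) == Seq(12L, 13L, 14L))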


[jira] [Closed] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Jun Rao (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao closed KAFKA-546.
-------------------------

    
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>             Fix For: 0.8
>
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch, kafka-546-v3.patch, kafka-546-v4.patch, kafka-546-v5.patch


[jira] [Updated] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Swapnil Ghike (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-546:
--------------------------------

    Attachment: kafka-546-v5.patch

A small addition to the unit test to make sure that the iterator does not have any extra elements. 
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch, kafka-546-v3.patch, kafka-546-v4.patch, kafka-546-v5.patch


[jira] [Updated] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Swapnil Ghike (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-546:
--------------------------------

    Attachment: kafka-546-v4.patch

30. Fixed.

31. The test in the v3 patch would have worked too, but I changed it for clarity in this patch.
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch, kafka-546-v3.patch, kafka-546-v4.patch


[jira] [Updated] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Swapnil Ghike (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-546:
--------------------------------

    Attachment: kafka-546-v1.patch

1. ConsumerIterator skips messages that have already been fetched.

2. consumer.PartitionTopicInfo.enqueue()
- Changed a statement to use the starting offset of a messageSet instead of the current fetchedOffset. The current code expects the incoming offset to be the same as the starting offset of the messageSet. But if a messageSet was partially consumed and fetched again, the fetchedOffset that goes into the FetchedDataChunk will be greater than the starting offset in the messageSet. The fix takes care of this situation without disturbing consumption under normal sequential fetches, where the behaviour is unchanged (a sketch follows this list).

3. Added a unit test for de-duplication of messages in ConsumerIterator.
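
A rough standalone model of the change in (2) above (illustrative names, not the real PartitionTopicInfo/FetchedDataChunk signatures):

    // If a fetch lands in the middle of a compressed set, the broker
    // returns the whole set, so the set's starting offset can be smaller
    // than the recorded fetchedOffset. Enqueueing the set's own starting
    // offset keeps the chunk consistent; under normal sequential fetches
    // the two values are equal, so behaviour is unchanged.
    case class MessageAndOffset(offset: Long)
    case class FetchedDataChunk(messages: Seq[MessageAndOffset], fetchOffset: Long)

    def enqueue(messages: Seq[MessageAndOffset], fetchedOffset: Long): FetchedDataChunk =
      FetchedDataChunk(messages, messages.head.offset) // was: fetchedOffset

    // Re-fetch at offset 12 of a compressed set spanning offsets 10..14:
    val chunk = enqueue((10L to 14L).map(MessageAndOffset), fetchedOffset = 12L)
    assert(chunk.fetchOffset == 10L)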
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch


[jira] [Updated] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Jay Kreps (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Kreps updated KAFKA-546:
----------------------------

    Summary: Fix commit() in zk consumer for compressed messages  (was: Fix commit() in zk consumer)
    
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps


[jira] [Updated] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Swapnil Ghike (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-546:
--------------------------------

    Attachment: kafka-546-v3.patch

Thanks for the comments. The fixes are as follows:

20. Changed ConsumerIterator to iterate until it reaches or passes ctiConsumeOffset.

21. Reverted this change because, as discussed:
i. On commit of part of a compressed message set, the fetchOffset that is checkpointed will be the actual fetchOffset, and not the offset of the last message in the compressed message set.
ii. We need to keep fetchOffset to make sure that ShallowIterator works fine under normal conditions.

22.1 Removed zkConsumerConnector.
22.2 The comments in the test case should be helpful in this regard.
22.3 Changed the test to use deep iterator over compressed message set.

Other random changes:
1. Optimized imports over changes that were pulled in via rebase.
2. I had missed removing the calls to toInt() in a couple of places in KAFKA-556 for FileMessageSet.sizeInBytes(); fixed here.
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch, kafka-546-v3.patch


[jira] [Commented] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Jun Rao (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489481#comment-13489481 ] 

Jun Rao commented on KAFKA-546:
-------------------------------

Can't seem to apply the patch cleanly to 0.8. Could you rebase?

$ patch -p0 < ~/Downloads/kafka-546-v1.patch 
patching file core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala
patching file core/src/main/scala/kafka/message/ByteBufferMessageSet.scala
Reversed (or previously applied) patch detected!  Assume -R? [n] ^C

                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch


[jira] [Assigned] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Swapnil Ghike (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike reassigned KAFKA-546:
-----------------------------------

    Assignee: Swapnil Ghike
    
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike


[jira] [Commented] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Jun Rao (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492552#comment-13492552 ] 

Jun Rao commented on KAFKA-546:
-------------------------------

Thanks for patch v3. A couple more comments:
30. ConsumerIterator.next(): In the following code, to be safe, we need to check if localCurrent hasNext in the while loop.
    // reject the messages that have already been consumed
    while (item.offset < currentTopicInfo.getConsumeOffset) {
      item = localCurrent.next()
    }
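
A guarded version of that loop, sketched standalone with a plain BufferedIterator (illustrative, not the actual ConsumerIterator internals):

    // Peek with head so next() is never called past the end of the
    // chunk, and stop once the offset reaches or passes the consume
    // offset.
    case class MessageAndOffset(offset: Long)

    def skipConsumed(localCurrent: BufferedIterator[MessageAndOffset],
                     consumeOffset: Long): Unit =
      while (localCurrent.hasNext && localCurrent.head.offset < consumeOffset)
        localCurrent.next()

    // Offsets 10..14 with a consume offset of 12: 10 and 11 are skipped.
    val it = (10L to 14L).map(MessageAndOffset).iterator.buffered
    skipConsumed(it, consumeOffset = 12L)
    assert(it.next().offset == 12L)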

31. ConsumerIteratorTest: Not sure if the test really does what it intends to. To simulate reading from the middle of a compressed messageset, we need to put in a consume offset larger than 0 in PartitionTopicInfo, right?
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch, kafka-546-v3.patch


[jira] [Comment Edited] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Swapnil Ghike (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489191#comment-13489191 ] 

Swapnil Ghike edited comment on KAFKA-546 at 11/2/12 1:34 AM:
--------------------------------------------------------------

1. ConsumerIterator skips messages that have already been fetched.

2. consumer.PartitionTopicInfo.enqueue()
- Made a change to pass the starting offset of a messageSet instead of the current fetchedOffset to FetchedDataChunk(). The current code expects the fetchedOffset to be the same as the starting offset of the incoming messageSet. But if a messageSet was partially consumed and fetched again, the fetchedOffset that goes into the FetchedDataChunk will be greater than the starting offset in the incoming messageSet. The fix takes care of this situation. It also does not disturb consumption under normal sequential fetches, because in that case the starting offset of the incoming messageSet will actually be the same as the fetchedOffset recorded in partitionTopicInfo.

3. Added a unit test for de-duplication of messages in ConsumerIterator.
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch


[jira] [Commented] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Jun Rao (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13490977#comment-13490977 ] 

Jun Rao commented on KAFKA-546:
-------------------------------

Thanks for patch v2. Some comments:

20. ConsumerIterator.next(): The following code depends on no gaps in offsets. This is true at this moment, but may not be true in the future when we have a different retention policy. A safer way is to keep iterating the messageSet until we get an offset that reaches or passes ctiConsumeOffset.
        for (i <- 0L until (ctiConsumeOffset - cdcFetchOffset)) {
          localCurrent.next()
        }

21. PartitionTopicInfo: In startOffset(), unfortunately, we can't use the shallow iterator. This is because when messages are compressed, the top-level message carries the offset of the last message (instead of the first one) in the compressed unit. Also, iterating messages here may not be ideal since it forces us to decompress. An alternative is to do the logic in ConsumerIterator.next(): every time we get a new chunk of messageset, we keep iterating it until the message offset reaches or passes the consume offset. This way, if we are doing shallow iteration, we don't have to decompress messages.
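
The wrapper-offset rule mentioned in 21, illustrated with a tiny standalone model (not the real Message/MessageSet types):

    // A compressed wrapper message carries the offset of the LAST inner
    // message, so a shallow iterator cannot recover the set's starting
    // offset without decompressing.
    case class Inner(offset: Long)
    case class Wrapper(inner: Seq[Inner]) {
      val offset: Long = inner.last.offset // 0.8 offset-assignment rule
    }

    val w = Wrapper(Seq(Inner(5L), Inner(6L), Inner(7L)))
    assert(w.offset == 7L) // the shallow view sees 7, not the starting offset 5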

22. ConsumerIteratorTest: 
22.1 zkConsumerConnector is not used.
22.2 We probably should set consumerOffset to a value >0 in PartitionTopicInfo.
22.3 Also, could we add a test that covers compressed messageset?
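
A model of the test shape suggested in 22.2/22.3 (standalone sketch; plain collections stand in for the real compressed messageset and PartitionTopicInfo):

    // Simulate reading from the middle of a compressed message set: the
    // consume offset is > 0, so the deep iterator must skip the prefix
    // that was already consumed and emit only the remainder, with no
    // extra elements.
    case class MessageAndOffset(offset: Long)

    val deepIterated = (0L to 9L).map(MessageAndOffset) // decompressed set
    val consumeOffset = 4L                               // middle of the set

    val emitted = deepIterated.dropWhile(_.offset < consumeOffset)
    assert(emitted.map(_.offset) == (4L to 9L))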

                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch


[jira] [Resolved] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Jun Rao (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao resolved KAFKA-546.
---------------------------

       Resolution: Fixed
    Fix Version/s: 0.8

Thanks for patch v5. +1. Committed to 0.8.
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>             Fix For: 0.8
>
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch, kafka-546-v3.patch, kafka-546-v4.patch, kafka-546-v5.patch


[jira] [Updated] (KAFKA-546) Fix commit() in zk consumer for compressed messages

Posted by "Swapnil Ghike (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/KAFKA-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swapnil Ghike updated KAFKA-546:
--------------------------------

    Attachment: kafka-546-v2.patch

Rebased.
                
> Fix commit() in zk consumer for compressed messages
> ---------------------------------------------------
>
>                 Key: KAFKA-546
>                 URL: https://issues.apache.org/jira/browse/KAFKA-546
>             Project: Kafka
>          Issue Type: New Feature
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Swapnil Ghike
>         Attachments: kafka-546-v1.patch, kafka-546-v2.patch
