Posted to dev@kafka.apache.org by "Jun Rao (Commented) (JIRA)" <ji...@apache.org> on 2011/11/10 18:34:51 UTC
[jira] [Commented] (KAFKA-198) Avoid duplicated message during consumer rebalance
[ https://issues.apache.org/jira/browse/KAFKA-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13147848#comment-13147848 ]
Jun Rao commented on KAFKA-198:
-------------------------------
This is a bit tricky to get right. The following is one possible design:
1. In ConsumerIterator, add a method clearCurrentChunk(). This method will clear the chunk currently being iterated. It has to be synchronized with makeNext().
2. In KafkaMessageStream, add a method clear() which calls consumerIterator.clearCurrentChunk().
3. In Fetcher, break initConnections() into two methods: stopConnections() and startConnections().
4. In ZookeeperConsumerConnector.updateFetcher, do the following:
a. call fetcher.stopConnections()
b. for each new fetcher to be created:
b1. clear the fetcher queue
b2. call KafkaMessageStream.clear()
c. call commitOffsets()
d. call fetcher.startConnections()
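The steps above might be sketched as follows. ConsumerIterator, clearCurrentChunk(), and makeNext() are the names from this comment; the chunk queue internals, the String message type, and the updateFetcher() driver are simplifying assumptions, not the actual Kafka 0.7 code. The key point is that clearCurrentChunk() and makeNext() synchronize on the same monitor, so a clear during rebalance cannot race with a consumer thread pulling a message out of a half-discarded chunk:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the proposed ConsumerIterator changes.
class ConsumerIterator {
    // Fetched data arrives as chunks (batches of messages) on a queue.
    private final BlockingQueue<List<String>> chunkQueue = new LinkedBlockingQueue<>();
    private Iterator<String> current = null;

    // Hands out the next buffered message, or null if nothing is buffered.
    // synchronized: shares a monitor with clearCurrentChunk() below.
    public synchronized String makeNext() {
        while (current == null || !current.hasNext()) {
            List<String> chunk = chunkQueue.poll();
            if (chunk == null) {
                return null; // no data buffered
            }
            current = chunk.iterator();
        }
        return current.next();
    }

    // Proposed step 1: discard the chunk being iterated (and anything still
    // queued) so that, after offsets are committed during a rebalance, the
    // buffered messages are not replayed to the new owner of the partition.
    public synchronized void clearCurrentChunk() {
        current = null;
        chunkQueue.clear();
    }

    public void addChunk(List<String> chunk) {
        chunkQueue.add(chunk);
    }
}

public class RebalanceSketch {
    public static void main(String[] args) {
        ConsumerIterator it = new ConsumerIterator();
        it.addChunk(Arrays.asList("m1", "m2"));
        System.out.println(it.makeNext());   // first buffered message
        it.clearCurrentChunk();              // step b2: drop buffered data
        System.out.println(it.makeNext());   // nothing left to replay
    }
}
```

In the full sequence of step 4, this clear happens after stopConnections() (so no new chunks arrive) and before commitOffsets(), which is what guarantees the committed offset never lags behind messages the application has already seen.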
> Avoid duplicated message during consumer rebalance
> --------------------------------------------------
>
> Key: KAFKA-198
> URL: https://issues.apache.org/jira/browse/KAFKA-198
> Project: Kafka
> Issue Type: Improvement
> Affects Versions: 0.7
> Reporter: Jun Rao
>
> Currently, a consumer can get duplicated messages when a rebalance is triggered. It would be good if we could eliminate those duplicated messages.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira