Posted to dev@kafka.apache.org by "Jun Rao (JIRA)" <ji...@apache.org> on 2012/10/23 17:25:11 UTC

[jira] [Updated] (KAFKA-580) system test testcase_0122 under replication fails due to large # of data loss

     [ https://issues.apache.org/jira/browse/KAFKA-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao updated KAFKA-580:
--------------------------

    Attachment: kafka-580.patch

Attached a patch. The problem is that if we remove items from a set while iterating over it, the behavior is not deterministic. Also fixed KAFKA-578 in the patch, since it touches the same code.
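
For illustration only, here is a minimal Scala sketch of the hazard and one safe pattern; the set contents and predicate are made up and this is not the patched Kafka code:

    import scala.collection.mutable

    object IterationHazard extends App {
      // Hypothetical replica set; names are illustrative.
      val replicas = mutable.Set("broker-1", "broker-2", "broker-3")

      // BROKEN: removing from the set while iterating over it.
      // Once the underlying HashSet is mutated, the iterator's behavior
      // is undefined, so elements may be skipped or visited twice.
      // for (r <- replicas) if (r != "broker-1") replicas -= r

      // Safe: iterate over an immutable snapshot, then mutate the original ...
      for (r <- replicas.toList) if (r != "broker-1") replicas -= r

      // ... or filter in place instead (retain in Scala 2.x):
      // replicas.retain(_ == "broker-1")

      println(replicas) // Set(broker-1)
    }

Iterating over the snapshot (replicas.toList) makes the traversal independent of the mutation, so the result is deterministic.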
                
> system test testcase_0122 under replication fails due to large # of data loss
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-580
>                 URL: https://issues.apache.org/jira/browse/KAFKA-580
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8
>            Reporter: Jun Rao
>            Priority: Blocker
>              Labels: bugs
>         Attachments: kafka-580.patch
>
>
> testcase_0122 sometimes fails because a large number of messages are lost with ack = 1. In this case, we expect only a small number of messages to be lost when there are broker failures.
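
For context, "ack = 1" corresponds to the 0.8 producer setting request.required.acks. A minimal sketch of such a producer follows; the broker address and topic name are placeholders, not values from the test case:

    import java.util.Properties
    import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

    val props = new Properties()
    props.put("metadata.broker.list", "localhost:9092") // placeholder broker
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    // ack = 1: the leader acknowledges as soon as it has the message,
    // without waiting for followers, so a leader failure should lose at
    // most the small window of messages not yet replicated.
    props.put("request.required.acks", "1")

    val producer = new Producer[String, String](new ProducerConfig(props))
    producer.send(new KeyedMessage[String, String]("test-topic", "hello"))
    producer.close()

With this setting, only the messages written to the leader but not yet copied to followers at the moment of failure should be lost, which is why a large loss indicates a bug.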

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira