Posted to commits@kafka.apache.org by "Jun Rao (JIRA)" <ji...@apache.org> on 2011/07/20 17:54:58 UTC

[jira] [Updated] (KAFKA-50) kafka replication

     [ https://issues.apache.org/jira/browse/KAFKA-50?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao updated KAFKA-50:
-------------------------

    Attachment: kafka_replication_lowlevel_design.pdf
                kafka_replication_highlevel_design.pdf

> kafka replication
> -----------------
>
>                 Key: KAFKA-50
>                 URL: https://issues.apache.org/jira/browse/KAFKA-50
>             Project: Kafka
>          Issue Type: New Feature
>         Attachments: kafka_replication_highlevel_design.pdf, kafka_replication_lowlevel_design.pdf
>
>
> Currently, Kafka has no replication: each log segment is stored on a single broker. This limits both the availability and the durability of Kafka. If a broker goes down, all log segments stored on that broker become unavailable to consumers. If a broker dies permanently (e.g., through disk failure), all unconsumed data on that node is lost forever. Our goal is to replicate every log segment to multiple brokers to improve both availability and durability.
> We'd like to support the following in Kafka replication: 
> 1. Configurable synchronous and asynchronous replication (see the producer sketch after this list)
> 2. A small window of unavailability (e.g., less than 5 seconds) during broker failures
> 3. Auto recovery when a failed broker rejoins 
> 4. Balanced load when a broker fails (i.e., the load on the failed broker is evenly spread among multiple surviving brokers)
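>
> A minimal sketch of how requirement 1 (configurable synchronous vs. asynchronous replication) can surface to a client. The acks setting and the Java producer API below are taken from what later Kafka releases expose, not from this proposal; the broker address and topic name are hypothetical.
>
>     import java.util.Properties;
>     import org.apache.kafka.clients.producer.KafkaProducer;
>     import org.apache.kafka.clients.producer.ProducerRecord;
>
>     public class ReplicationAckDemo {
>         public static void main(String[] args) {
>             Properties props = new Properties();
>             props.put("bootstrap.servers", "broker1:9092");  // hypothetical broker address
>             // acks=all: the send is acknowledged only after all in-sync replicas
>             //           have the record (synchronous replication).
>             // acks=1:   the leader acknowledges at once and followers copy the
>             //           record asynchronously, trading durability for latency.
>             props.put("acks", "all");
>             props.put("key.serializer",
>                       "org.apache.kafka.common.serialization.StringSerializer");
>             props.put("value.serializer",
>                       "org.apache.kafka.common.serialization.StringSerializer");
>             try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
>                 producer.send(new ProducerRecord<>("replicated-topic", "key", "value"));
>             }
>         }
>     }
>
> Switching acks between "all" and "1" is what makes the replication mode configurable per producer rather than fixed for the whole cluster.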

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira