Posted to users@kafka.apache.org by Romain Pelissier <ro...@gmail.com> on 2018/03/26 19:39:36 UTC

Kafka replication

Hi all!
Ok, I have been playing these days with replication for a specific purpose:
having a failover solution with 2 physical servers.
So the idea is to have 2 ZooKeeper (ZK) nodes on 1 box and 1 ZK on the 2nd
box, then 3 Kafka brokers on each box (so 6 in total).
I want failover so that if box1 fails I still have my data on the
replicated node.

But it seems far more complex than I thought... I first thought that forcing
the partitions on box1 to be replicated to box2 would be enough, but I am
not sure it is going to work as expected.

I see that Confluent has a replicator tool (not sure if it is open source or
not) and also uReplicator from Uber (https://github.com/uber/uReplicator),
which could fit my needs.

Can someone share how I can achieve this failover, based on your own
experience?

Thanks!
R.

Re: Kafka replication

Posted by Christophe Schmitz <ch...@instaclustr.com>.
Hi Romain,

ZooKeeper needs to achieve quorum to function. This means that a strict
majority of nodes need to be alive. If your box1 dies, you lose 2 ZooKeeper
services, with only one left, and your system is already offline. So you
would need to spread your ZooKeeper services across 3 physical nodes. That
way, if one node dies, your system (Kafka + ZooKeeper) still functions
correctly.
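For reference, a minimal zoo.cfg sketch for a three-node ensemble (the
hostnames node1/node2/node3 and the paths are placeholders; each server
also needs a myid file under dataDir containing its own id):

  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/var/lib/zookeeper
  clientPort=2181
  server.1=node1:2888:3888
  server.2=node2:2888:3888
  server.3=node3:2888:3888
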
A single Kafka cluster is HA by design; you don't need a secondary Kafka
cluster. The only reasons why you might need a secondary Kafka cluster would
be:
1. You are afraid that the whole data center could go down, and you want to
replicate your Kafka cluster to another physical data center.
2. You want low latency across the globe, and want a Kafka cluster in
multiple geographical regions.

Are you in one of those cases?
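
If you are, note that Kafka itself ships with MirrorMaker for copying topics
between two clusters (as far as I know, uReplicator, which you mentioned,
started life as an improved MirrorMaker). A rough sketch, assuming a
hypothetical consumer.properties pointing at the source cluster and
producer.properties pointing at the destination cluster:

  bin/kafka-mirror-maker.sh \
    --consumer.config consumer.properties \
    --producer.config producer.properties \
    --whitelist "my-topic.*"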

If not, just spread 3 Kafka brokers across your three physical nodes, make
sure you configure your topics to be replicated, and you will achieve HA
without needing a secondary Kafka cluster.
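
For example, something like this (topic name and ZooKeeper addresses are
placeholders; on Kafka 1.x kafka-topics.sh still talks to ZooKeeper):

  bin/kafka-topics.sh --create \
    --zookeeper node1:2181,node2:2181,node3:2181 \
    --topic my-topic \
    --partitions 3 \
    --replication-factor 3 \
    --config min.insync.replicas=2

With replication factor 3 and min.insync.replicas=2, plus acks=all on the
producer side, you can lose any single node without losing data or
availability.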

Cheers,

Christophe

-- 

Christophe Schmitz - VP Consulting

AU: +61 4 03751980 / FR: +33 7 82022899
