Posted to users@kafka.apache.org by Monika Garg <ga...@gmail.com> on 2013/10/15 12:46:02 UTC

Kafka and Zookeeper node removal from a two-node Kafka cluster

I have a 2-node Kafka cluster with default.replication.factor=2 set in the
server.properties file.

I removed one node: to remove it, I killed the Kafka process and deleted all
the kafka-logs and the Kafka bundle from that node.

Then I stopped the remaining node in the cluster and started it again
(default.replication.factor is still set to 2 in this node's
server.properties file).
I was expecting some error/exception, since I no longer have two nodes in my
cluster. But I didn't get any error/exception: the node started successfully
and I am able to create topics on it.

So should "default.replication.factor" be changed from
"default.replication.factor=2" to "default.replication.factor=1" on the
remaining running node?

Similarly, suppose there are two external ZooKeeper nodes
(zookeeper.connect=host1:port1,host2:port1) in my cluster and I have now
removed one ZooKeeper node (host1:port1) from the cluster. Should the
property "zookeeper.connect" be updated from
(zookeeper.connect=host1:port1,host2:port1) to
(zookeeper.connect=host2:port1)?

-- 
*Moniii*
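
For reference, the configuration described above corresponds roughly to the
following server.properties entries on each broker (the broker id, host
names, and ports below are placeholders, not values taken from the actual
cluster):

    # unique per broker
    broker.id=1
    # replication factor applied to automatically created topics
    default.replication.factor=2
    # full ZooKeeper connection string listing both external ZooKeeper nodes
    zookeeper.connect=host1:port1,host2:port1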

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Jason Rosenberg <jb...@squareup.com>.
Yeah, so it would seem a workaround could be to defer full replica
assignment until adequate brokers are available, but in the meantime allow
topic creation to proceed.

With respect to Joel's point about the possibility of imbalanced
partition assignment if not all replicas are available, this really just
points to the long-term need for partition rebalancing (and I'd argue
this should be dynamic, and should support heterogeneous server nodes,
where some can have more capacity than others).  Ultimately, if we want to
support easy horizontal scalability, it should be possible to quickly add
new nodes to a cluster and have them start taking up an equal share of the
load in short order.
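
For context, manual partition reassignment already exists as a tool; below is
a rough sketch of what "rebalancing onto a new node" looks like today,
assuming the kafka-reassign-partitions.sh options documented for the 0.8.x
line (the topic name, broker ids, and file name are made up for
illustration):

    # move.json - desired replica placement (one partition shown)
    {"version":1,
     "partitions":[{"topic":"my-topic","partition":0,"replicas":[1,2]}]}

    # ask the cluster to move replicas according to move.json
    bin/kafka-reassign-partitions.sh --zookeeper host2:port1 \
      --reassignment-json-file move.json --execute

This is a one-shot, operator-driven step rather than the dynamic rebalancing
argued for above.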


On Tue, Oct 15, 2013 at 8:57 PM, Jun Rao <ju...@gmail.com> wrote:

> When creating a new topic, we require # live brokers to be equal to or
> larger than # replicas. Without enough brokers, can't complete the replica
> assignment since we can't assign more than 1 replica on the same broker.
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 15, 2013 at 1:47 PM, Jason Rosenberg <jb...@squareup.com> wrote:
>
> > Is there a fundamental reason for not allowing creation of new topics
> > while in an under-replicated state?  For systems that use automatic topic
> > creation, it seems like losing a node in this case is akin to the cluster
> > being unavailable, if one of the nodes goes down, etc.
> >
> >
> > On Tue, Oct 15, 2013 at 1:25 PM, Joel Koshy <jj...@gmail.com> wrote:
> >
> > > Steve - that's right. I think Monika wanted clarification on what
> > > would happen if replication factor is two and only one broker is
> > > available. In that case, you won't be able to create new topics with
> > > replication factor two (you should see an AdministrationException
> > > saying the replication factor is larger than available brokers).
> > >
> > > However, you can send messages to available partitions of topics that
> > > have already been created - because the ISR would shrink to only one
> > > replica for those topics - although the cluster would be in an
> > > under-replicated state. This is covered in the documentation
> > > (http://kafka.apache.org/documentation.html#replication) under the
> > > discussion about ISR.
> > >
> > > Thanks,
> > >
> > > Joel
> > >
> > > On Tue, Oct 15, 2013 at 10:19 AM, Steve Morin <st...@stevemorin.com>
> > > wrote:
> > > > If you have a double broker failure with replication factor of 2 and
> > > > only have 2 brokers in the cluster.  Wouldn't every partition be not
> > > > available?
> > > >
> > > >
> > > > On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao <ju...@gmail.com> wrote:
> > > >
> > > >> If you have double broker failures with a replication factor of 2,
> > > >> some partitions will not be available. When one of the brokers comes
> > > >> back, the partition is made available again (there is potential data
> > > >> loss), but in an under replicated mode. After the second broker comes
> > > >> back, it will catch up from the other replica and the partition will
> > > >> eventually be fully replicated. There is no need to change the
> > > >> replication factor during this process.
> > > >>
> > > >> As for ZK, you can always use the full connection string. ZK will
> > > >> pick live servers to establish connections.
> > > >>
> > > >> Thanks,
> > > >>
> > > >> Jun
> > > >>
> > > >>
> > > >> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com>
> > > >> wrote:
> > > >>
> > > >> > I have 2 nodes kafka cluster with default.replication.factor=2,is
> > > >> > set in server.properties file.
> > > >> >
> > > >> > I removed one node-in removing that node,I killed Kafka
> > > >> > process,removed all the kafka-logs and bundle from that node.
> > > >> >
> > > >> > Then I stopped my remaining running node in the cluster and started
> > > >> > again(default.replication.factor is still set to 2 in this node
> > > >> > server.properties file).
> > > >> > I was expecting some error/exception as now I don't have two nodes
> > > >> > in my cluster.But I didn't get any error/exception and my node
> > > >> > successfully started and I am able to create topics on it.
> > > >> >
> > > >> > So should the "default.replication.factor" be updated from
> > > >> > "default.replication.factor=2" to "default.replication.factor=1" ,
> > > >> > in the remaining running node?
> > > >> >
> > > >> > Similarly if there are two external zookeeper
> > > >> > nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and
> > > >> > now I have removed one zookeeper node(host1:port1) from the
> > > >> > cluster,So should the property "zookeeper.connect" be updated from
> > > >> > (zookeeper.connect=host1:port1,host2:port1) to
> > > >> > (zookeeper.connect=host2:port1)?
> > > >> >
> > > >> > --
> > > >> > *Moniii*
> > > >> >
> > > >>
> > >
> >
>

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Jun Rao <ju...@gmail.com>.
When creating a new topic, we require the number of live brokers to be equal
to or larger than the number of replicas. Without enough brokers, we can't
complete the replica assignment, since we can't assign more than one replica
of a partition to the same broker.

Thanks,

Jun
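
To make that concrete: with both brokers up, a request like the one below
succeeds, but with only one live broker it is rejected, because two replicas
of the same partition cannot be placed on one broker. This is only a sketch,
assuming the kafka-topics.sh admin tool from later 0.8.x releases (earlier
releases shipped separate per-command scripts); the topic name and ZooKeeper
address are placeholders:

    bin/kafka-topics.sh --create --zookeeper host2:port1 \
      --topic test-topic --partitions 1 --replication-factor 2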


On Tue, Oct 15, 2013 at 1:47 PM, Jason Rosenberg <jb...@squareup.com> wrote:

> Is there a fundamental reason for not allowing creation of new topics while
> in an under-replicated state?  For systems that use automatic topic
> creation, it seems like losing a node in this case is akin to the cluster
> being unavailable, if one of the nodes goes down, etc.
>
>
> On Tue, Oct 15, 2013 at 1:25 PM, Joel Koshy <jj...@gmail.com> wrote:
>
> > Steve - that's right. I think Monika wanted clarification on what
> > would happen if replication factor is two and only one broker is
> > available. In that case, you won't be able to create new topics with
> > replication factor two (you should see an AdministrationException
> > saying the replication factor is larger than available brokers).
> >
> > However, you can send messages to available partitions of topics that
> > have already been created - because the ISR would shrink to only one
> > replica for those topics - although the cluster would be in an
> > under-replicated state. This is covered in the documentation
> > (http://kafka.apache.org/documentation.html#replication) under the
> > discussion about ISR.
> >
> > Thanks,
> >
> > Joel
> >
> > On Tue, Oct 15, 2013 at 10:19 AM, Steve Morin <st...@stevemorin.com>
> > wrote:
> > > If you have a double broker failure with replication factor of 2 and
> > > only have 2 brokers in the cluster.  Wouldn't every partition be not
> > > available?
> > >
> > >
> > > On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao <ju...@gmail.com> wrote:
> > >
> > >> If you have double broker failures with a replication factor of 2,
> > >> some partitions will not be available. When one of the brokers comes
> > >> back, the partition is made available again (there is potential data
> > >> loss), but in an under replicated mode. After the second broker comes
> > >> back, it will catch up from the other replica and the partition will
> > >> eventually be fully replicated. There is no need to change the
> > >> replication factor during this process.
> > >>
> > >> As for ZK, you can always use the full connection string. ZK will
> > >> pick live servers to establish connections.
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >>
> > >> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com>
> > >> wrote:
> > >>
> > >> > I have 2 nodes kafka cluster with default.replication.factor=2,is
> > >> > set in server.properties file.
> > >> >
> > >> > I removed one node-in removing that node,I killed Kafka
> > >> > process,removed all the kafka-logs and bundle from that node.
> > >> >
> > >> > Then I stopped my remaining running node in the cluster and started
> > >> > again(default.replication.factor is still set to 2 in this node
> > >> > server.properties file).
> > >> > I was expecting some error/exception as now I don't have two nodes
> > >> > in my cluster.But I didn't get any error/exception and my node
> > >> > successfully started and I am able to create topics on it.
> > >> >
> > >> > So should the "default.replication.factor" be updated from
> > >> > "default.replication.factor=2" to "default.replication.factor=1" ,
> > >> > in the remaining running node?
> > >> >
> > >> > Similarly if there are two external zookeeper
> > >> > nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and
> > >> > now I have removed one zookeeper node(host1:port1) from the
> > >> > cluster,So should the property "zookeeper.connect" be updated from
> > >> > (zookeeper.connect=host1:port1,host2:port1) to
> > >> > (zookeeper.connect=host2:port1)?
> > >> >
> > >> > --
> > >> > *Moniii*
> > >> >
> > >>
> >
>

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Joel Koshy <jj...@gmail.com>.
That's a good question. Off the top of my head I don't remember any
fundamentally good reason why we don't allow it - apart from:
- broker registration paths are ephemeral so topic creation cannot
succeed when there are insufficient brokers available
- it may be confusing to some users to successfully create a topic and
allow its partition replicas to be assigned to brokers that are
unavailable

One can possibly argue that topic creation should not be permitted
with _any_ unavailable brokers since extended unavailability could
result in imbalanced partition distribution (when the unavailable
brokers are restored).

Anyway, unless I'm missing something obvious I think your point is
worth discussing further in a jira.

Joel
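
The ephemeral registration paths Joel refers to can be inspected directly; a
quick, hedged example using the zookeeper-shell.sh script that ships with
Kafka (the ZooKeeper address is a placeholder):

    # lists the ids of currently registered brokers; an id disappears
    # as soon as that broker's ZooKeeper session expires
    bin/zookeeper-shell.sh host2:port1 ls /brokers/ids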

On Tue, Oct 15, 2013 at 1:47 PM, Jason Rosenberg <jb...@squareup.com> wrote:
> Is there a fundamental reason for not allowing creation of new topics while
> in an under-replicated state?  For systems that use automatic topic
> creation, it seems like losing a node in this case is akin to the cluster
> being unavailable, if one of the nodes goes down, etc.
>
>
> On Tue, Oct 15, 2013 at 1:25 PM, Joel Koshy <jj...@gmail.com> wrote:
>
>> Steve - that's right. I think Monika wanted clarification on what
>> would happen if replication factor is two and only one broker is
>> available. In that case, you won't be able to create new topics with
>> replication factor two (you should see an AdministrationException
>> saying the replication factor is larger than available brokers).
>>
>> However, you can send messages to available partitions of topics that
>> have already been created - because the ISR would shrink to only one
>> replica for those topics - although the cluster would be in an
>> under-replicated state. This is covered in the documentation
>> (http://kafka.apache.org/documentation.html#replication) under the
>> discussion about ISR.
>>
>> Thanks,
>>
>> Joel
>>
>> On Tue, Oct 15, 2013 at 10:19 AM, Steve Morin <st...@stevemorin.com>
>> wrote:
>> > If you have a double broker failure with replication factor of 2 and only
>> > have 2 brokers in the cluster.  Wouldn't every partition be not
>> > available?
>> >
>> >
>> > On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao <ju...@gmail.com> wrote:
>> >
>> >> If you have double broker failures with a replication factor of 2, some
>> >> partitions will not be available. When one of the brokers comes back,
>> >> the partition is made available again (there is potential data loss),
>> >> but in an under replicated mode. After the second broker comes back, it
>> >> will catch up from the other replica and the partition will eventually
>> >> be fully replicated. There is no need to change the replication factor
>> >> during this process.
>> >>
>> >> As for ZK, you can always use the full connection string. ZK will pick
>> >> live servers to establish connections.
>> >>
>> >> Thanks,
>> >>
>> >> Jun
>> >>
>> >>
>> >> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com>
>> >> wrote:
>> >>
>> >> > I have 2 nodes kafka cluster with default.replication.factor=2,is set
>> >> > in server.properties file.
>> >> >
>> >> > I removed one node-in removing that node,I killed Kafka
>> >> > process,removed all the kafka-logs and bundle from that node.
>> >> >
>> >> > Then I stopped my remaining running node in the cluster and started
>> >> > again(default.replication.factor is still set to 2 in this node
>> >> > server.properties file).
>> >> > I was expecting some error/exception as now I don't have two nodes in
>> >> > my cluster.But I didn't get any error/exception and my node
>> >> > successfully started and I am able to create topics on it.
>> >> >
>> >> > So should the "default.replication.factor" be updated from
>> >> > "default.replication.factor=2" to "default.replication.factor=1" , in
>> >> > the remaining running node?
>> >> >
>> >> > Similarly if there are two external zookeeper
>> >> > nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and
>> >> > now I have removed one zookeeper node(host1:port1) from the
>> >> > cluster,So should the property "zookeeper.connect" be updated from
>> >> > (zookeeper.connect=host1:port1,host2:port1) to
>> >> > (zookeeper.connect=host2:port1)?
>> >> >
>> >> > --
>> >> > *Moniii*
>> >> >
>> >>
>>

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Jason Rosenberg <jb...@squareup.com>.
Is there a fundamental reason for not allowing creation of new topics while
in an under-replicated state?  For systems that use automatic topic
creation, it seems like losing a node in this case is akin to the cluster
being unavailable, if one of the nodes goes down, etc.


On Tue, Oct 15, 2013 at 1:25 PM, Joel Koshy <jj...@gmail.com> wrote:

> Steve - that's right. I think Monika wanted clarification on what
> would happen if replication factor is two and only one broker is
> available. In that case, you won't be able to create new topics with
> replication factor two (you should see an AdministrationException
> saying the replication factor is larger than available brokers).
>
> However, you can send messages to available partitions of topics that
> have already been created - because the ISR would shrink to only one
> replica for those topics - although the cluster would be in an
> under-replicated state. This is covered in the documentation
> (http://kafka.apache.org/documentation.html#replication) under the
> discussion about ISR.
>
> Thanks,
>
> Joel
>
> On Tue, Oct 15, 2013 at 10:19 AM, Steve Morin <st...@stevemorin.com>
> wrote:
> > If you have a double broker failure with replication factor of 2 and only
> > have 2 brokers in the cluster.  Wouldn't every partition be not
> > available?
> >
> >
> > On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao <ju...@gmail.com> wrote:
> >
> >> If you have double broker failures with a replication factor of 2, some
> >> partitions will not be available. When one of the brokers comes back,
> >> the partition is made available again (there is potential data loss),
> >> but in an under replicated mode. After the second broker comes back, it
> >> will catch up from the other replica and the partition will eventually
> >> be fully replicated. There is no need to change the replication factor
> >> during this process.
> >>
> >> As for ZK, you can always use the full connection string. ZK will pick
> >> live servers to establish connections.
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >>
> >> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com>
> >> wrote:
> >>
> >> > I have 2 nodes kafka cluster with default.replication.factor=2,is set
> >> > in server.properties file.
> >> >
> >> > I removed one node-in removing that node,I killed Kafka
> >> > process,removed all the kafka-logs and bundle from that node.
> >> >
> >> > Then I stopped my remaining running node in the cluster and started
> >> > again(default.replication.factor is still set to 2 in this node
> >> > server.properties file).
> >> > I was expecting some error/exception as now I don't have two nodes in
> >> > my cluster.But I didn't get any error/exception and my node
> >> > successfully started and I am able to create topics on it.
> >> >
> >> > So should the "default.replication.factor" be updated from
> >> > "default.replication.factor=2" to "default.replication.factor=1" , in
> >> > the remaining running node?
> >> >
> >> > Similarly if there are two external zookeeper
> >> > nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and
> >> > now I have removed one zookeeper node(host1:port1) from the
> >> > cluster,So should the property "zookeeper.connect" be updated from
> >> > (zookeeper.connect=host1:port1,host2:port1) to
> >> > (zookeeper.connect=host2:port1)?
> >> >
> >> > --
> >> > *Moniii*
> >> >
> >>
>

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Joel Koshy <jj...@gmail.com>.
Steve - that's right. I think Monika wanted clarification on what
would happen if replication factor is two and only one broker is
available. In that case, you won't be able to create new topics with
replication factor two (you should see an AdministrationException
saying the replication factor is larger than available brokers).

However, you can send messages to available partitions of topics that
have already been created - because the ISR would shrink to only one
replica for those topics - although the cluster would be in an
under-replicated state. This is covered in the documentation
(http://kafka.apache.org/documentation.html#replication) under the
discussion about ISR.

Thanks,

Joel
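
To illustrate Joel's second point, producing to a topic that already exists
keeps working against the surviving replica even while the cluster is
under-replicated; a minimal sketch using the stock console producer (the
broker host and topic name are placeholders):

    # messages go to partitions whose shrunken ISR lives on the surviving broker
    bin/kafka-console-producer.sh --broker-list broker-host:9092 \
      --topic existing-topic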

On Tue, Oct 15, 2013 at 10:19 AM, Steve Morin <st...@stevemorin.com> wrote:
> If you have a double broker failure with replication factor of 2 and only
> have 2 brokers in the cluster.  Wouldn't every partition be not available?
>
>
> On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao <ju...@gmail.com> wrote:
>
>> If you have double broker failures with a replication factor of 2, some
>> partitions will not be available. When one of the brokers comes back, the
>> partition is made available again (there is potential data loss), but in an
>> under replicated mode. After the second broker comes back, it will catch up
>> from the other replica and the partition will eventually be fully
>> replicated. There is no need to change the replication factor during this
>> process.
>>
>> As for ZK, you can always use the full connection string. ZK will pick live
>> servers to establish connections.
>>
>> Thanks,
>>
>> Jun
>>
>>
>> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com> wrote:
>>
>> > I have 2 nodes kafka cluster with default.replication.factor=2,is set in
>> > server.properties file.
>> >
>> > I removed one node-in removing that node,I killed Kafka process,removed
>> all
>> > the kafka-logs and bundle from that node.
>> >
>> > Then I stopped my remaining running node in the cluster and started
>> > again(default.replication.factor is still set to 2 in this node
>> > server.properties file).
>> > I was expecting some error/exception as now I don't have two nodes in my
>> > cluster.But I didn't get any error/exception and my node successfully
>> > started and I am able to create topics on it.
>> >
>> > So should the "default.replication.factor" be updated from
>> > "default.replication.factor=2" to "default.replication.factor=1" , in the
>> > remaining running node?
>> >
>> > Similarly if there are two external zookeeper
>> > nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and now I
>> > have removed one zookeeper node(host1:port1) from the cluster,So should
>> the
>> > property "zookeeper.connect" be updated from
>> > (zookeeper.connect=host1:port1,host2:port1) to
>> > (zookeeper.connect=host2:port1)?
>> >
>> > --
>> > *Moniii*
>> >
>>

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Steve Morin <st...@stevemorin.com>.
If you have a double broker failure with a replication factor of 2 and only
have 2 brokers in the cluster, wouldn't every partition be unavailable?


On Tue, Oct 15, 2013 at 8:48 AM, Jun Rao <ju...@gmail.com> wrote:

> If you have double broker failures with a replication factor of 2, some
> partitions will not be available. When one of the brokers comes back, the
> partition is made available again (there is potential data loss), but in an
> under replicated mode. After the second broker comes back, it will catch up
> from the other replica and the partition will eventually be fully
> replicated. There is no need to change the replication factor during this
> process.
>
> As for ZK, you can always use the full connection string. ZK will pick live
> servers to establish connections.
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com> wrote:
>
> > I have 2 nodes kafka cluster with default.replication.factor=2,is set in
> > server.properties file.
> >
> > I removed one node-in removing that node,I killed Kafka process,removed
> all
> > the kafka-logs and bundle from that node.
> >
> > Then I stopped my remaining running node in the cluster and started
> > again(default.replication.factor is still set to 2 in this node
> > server.properties file).
> > I was expecting some error/exception as now I don't have two nodes in my
> > cluster.But I didn't get any error/exception and my node successfully
> > started and I am able to create topics on it.
> >
> > So should the "default.replication.factor" be updated from
> > "default.replication.factor=2" to "default.replication.factor=1" , in the
> > remaining running node?
> >
> > Similarly if there are two external zookeeper
> > nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and now I
> > have removed one zookeeper node(host1:port1) from the cluster,So should
> the
> > property "zookeeper.connect" be updated from
> > (zookeeper.connect=host1:port1,host2:port1) to
> > (zookeeper.connect=host2:port1)?
> >
> > --
> > *Moniii*
> >
>

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Monika Garg <ga...@gmail.com>.
Thanks for replying. :)
What if the second broker never comes back?
On Oct 15, 2013 3:48 PM, "Jun Rao" <ju...@gmail.com> wrote:

> If you have double broker failures with a replication factor of 2, some
> partitions will not be available. When one of the brokers comes back, the
> partition is made available again (there is potential data loss), but in an
> under replicated mode. After the second broker comes back, it will catch up
> from the other replica and the partition will eventually be fully
> replicated. There is no need to change the replication factor during this
> process.
>
> As for ZK, you can always use the full connection string. ZK will pick live
> servers to establish connections.
>
> Thanks,
>
> Jun
>
>
> On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com> wrote:
>
> > I have 2 nodes kafka cluster with default.replication.factor=2,is set in
> > server.properties file.
> >
> > I removed one node-in removing that node,I killed Kafka process,removed
> all
> > the kafka-logs and bundle from that node.
> >
> > Then I stopped my remaining running node in the cluster and started
> > again(default.replication.factor is still set to 2 in this node
> > server.properties file).
> > I was expecting some error/exception as now I don't have two nodes in my
> > cluster.But I didn't get any error/exception and my node successfully
> > started and I am able to create topics on it.
> >
> > So should the "default.replication.factor" be updated from
> > "default.replication.factor=2" to "default.replication.factor=1" , in the
> > remaining running node?
> >
> > Similarly if there are two external zookeeper
> > nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and now I
> > have removed one zookeeper node(host1:port1) from the cluster,So should
> the
> > property "zookeeper.connect" be updated from
> > (zookeeper.connect=host1:port1,host2:port1) to
> > (zookeeper.connect=host2:port1)?
> >
> > --
> > *Moniii*
> >
>

Re: Kafka and Zookeeper node removal from a two-node Kafka cluster

Posted by Jun Rao <ju...@gmail.com>.
If you have a double broker failure with a replication factor of 2, some
partitions will not be available. When one of the brokers comes back, those
partitions are made available again (with potential data loss), but in an
under-replicated mode. After the second broker comes back, it will catch up
from the other replica, and the partitions will eventually be fully
replicated. There is no need to change the replication factor during this
process.

As for ZK, you can always keep the full connection string; ZK will pick live
servers to establish connections.

Thanks,

Jun
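
One way to watch the under-replicated state Jun describes is the topic tool's
describe mode; a sketch, assuming the --under-replicated-partitions option of
the later 0.8.x kafka-topics.sh tool (the ZooKeeper address is a placeholder):

    # lists partitions whose ISR is currently smaller than their replica list
    bin/kafka-topics.sh --describe --zookeeper host2:port1 \
      --under-replicated-partitions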


On Tue, Oct 15, 2013 at 3:46 AM, Monika Garg <ga...@gmail.com> wrote:

> I have 2 nodes kafka cluster with default.replication.factor=2,is set in
> server.properties file.
>
> I removed one node-in removing that node,I killed Kafka process,removed all
> the kafka-logs and bundle from that node.
>
> Then I stopped my remaining running node in the cluster and started
> again(default.replication.factor is still set to 2 in this node
> server.properties file).
> I was expecting some error/exception as now I don't have two nodes in my
> cluster.But I didn't get any error/exception and my node successfully
> started and I am able to create topics on it.
>
> So should the "default.replication.factor" be updated from
> "default.replication.factor=2" to "default.replication.factor=1" , in the
> remaining running node?
>
> Similarly if there are two external zookeeper
> nodes(zookeeper.connect=host1:port1,host2:port1) in my cluster and now I
> have removed one zookeeper node(host1:port1) from the cluster,So should the
> property "zookeeper.connect" be updated from
> (zookeeper.connect=host1:port1,host2:port1) to
> (zookeeper.connect=host2:port1)?
>
> --
> *Moniii*
>