Posted to users@kafka.apache.org by Can Zhang <ca...@canx.me> on 2020/06/24 03:22:25 UTC

Kafka still writable when only one process available

Hello,

I'm doing some tests on how Kafka behaves under failure conditions.

Here is my config (one of 3 brokers in total, comments removed):

broker.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.0.8.233:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
default.replication.factor=3
replication.factor=3
acks=all
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
min.insync.replicas = 2
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=3000

I created a topic with 1 partition and 3 replicas, and produced/consumed
with the shell tools shipped with Kafka. I found that even after I kill 2
of the 3 Kafka processes with `kill -9`, the topic is still writable. I
believe this could lead to data loss.
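
Roughly, the steps were as follows (the topic name is just an example):

    bin/kafka-topics.sh --create --bootstrap-server 10.0.8.233:9092 \
        --replication-factor 3 --partitions 1 --topic test-topic
    bin/kafka-console-producer.sh --broker-list 10.0.8.233:9092 --topic test-topic
    bin/kafka-console-consumer.sh --bootstrap-server 10.0.8.233:9092 \
        --topic test-topic --from-beginning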

I don't know if I misconfigured something; could someone review it for
me? I'm testing with kafka_2.12-2.5.0.



Best,
Can Zhang

Re: Kafka still writable when only one process available

Posted by Can Zhang <ca...@canx.me>.
Hi Liam,

Thanks for your quick reply and the great, detailed explanation!

When starting the console producer with "--request-required-acks all", I
do see "Messages are rejected since there are fewer in-sync replicas
than required" in the log.


Best,
Can Zhang



Re: Kafka still writable when only one process available

Posted by Liam Clarke-Hutchinson <li...@adscale.co.nz>.
Hi Can,

acks is a producer setting only; setting it to all on the broker will
have no effect. The default acks for a producer is 1, which means that as
long as the partition leader acknowledges the write, it is considered
successful. You have three replicas; killing two brokers leaves 1 replica
(which becomes the leader if it wasn't already, assuming it was in-sync
beforehand), so the producer receives that 1 ack and considers the write
complete.

If you set acks to all on the producer, it will throw an exception when
attempting a write while there aren't enough in-sync replicas to
acknowledge it.
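
For example, with the Java client it would look something like this (a
minimal sketch; the broker address and topic name are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class AcksAllProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker address; serializers for simple string messages.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.0.8.233:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // acks is a producer setting: "all" makes the leader wait for the full ISR,
            // so with min.insync.replicas=2 writes are rejected once only 1 broker is left.
            props.put(ProducerConfig.ACKS_CONFIG, "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "value"),
                        (metadata, exception) -> {
                            if (exception != null) {
                                // Typically NotEnoughReplicasException, or a TimeoutException
                                // once retries are exhausted.
                                exception.printStackTrace();
                            }
                        });
                producer.flush();
            }
        }
    }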

Hope that makes sense?

Kind regards,

Liam Clarke-Hutchinson
