Posted to users@kafka.apache.org by Stevo Slavić <ss...@gmail.com> on 2015/10/30 19:32:30 UTC

New producer and storing offsets in Kafka - previously committed offsets fetched as uncommitted

Hello Apache Kafka community,

I'm trying to use the new producer from kafka-clients 0.8.2.2, together with
the simple consumer, to fetch and commit offsets stored in Kafka, and I'm
seeing strange behavior: a committed offset/message gets read multiple
times, offset fetch requests do not always see committed offsets as
committed, and the likelihood of a message being read and committed multiple
times seems to increase under higher load.

Before trying the new producer I was using the old one in sync mode, with
the same consumer code fetching/committing offsets stored in Kafka, and all
was well: offsets/messages, once committed, were never read again. To get
equivalent behavior (the old sync producer's guarantees) while keeping the
benefit of batching multiple produce requests together, I'm using the new
producer's async batching support, but blocking each produce until the
response is received or the produce fails, waiting for all in-sync replicas.
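Roughly, the produce path looks like the sketch below (topic name, broker
address, and record contents are placeholders here, not my real values):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BlockingProduceSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("acks", "all"); // wait for all in-sync replicas to acknowledge
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // send() is asynchronous and batches internally; calling get()
            // on the returned Future blocks until the broker responds or
            // the produce fails, giving sync-producer semantics per record.
            RecordMetadata metadata = producer
                .send(new ProducerRecord<>("my-topic", "key", "value"))
                .get();
        } finally {
            producer.close();
        }
    }
}
```

The point of the Future.get() call is that batching still happens under the
hood, but the caller observes success or failure before moving on, just as
with the old sync producer.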

Is this a (known) bug or is it by design?

Kind regards,
Stevo Slavic.