Posted to users@kafka.apache.org by Suyog Rao <su...@gmail.com> on 2016/01/12 03:47:34 UTC

Stalling behaviour with 0.9 console consumer

Hi, I started with a clean install of a 0.9 Kafka broker and populated a test
topic with 1 million messages. I then used the console consumer to read from
the beginning offset. Using --new-consumer reads the messages, but it stalls
after every x messages or so, and then continues again. It is very batchy in
its behaviour. If I go back to the old consumer, I am able to stream the
messages continuously. Am I missing a timeout setting or something?

I created my own consumer in Java and call poll(0) in a loop (roughly as in
the sketch below), but I still get the same behaviour. This is on Mac OS X
(Yosemite) with Java version "1.8.0_65".

Any ideas?

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
apache_logs --from-beginning --new-consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
apache_logs --from-beginning --zookeeper localhost:2181

Re: Stalling behaviour with 0.9 console consumer

Posted by Suyog Rao <su...@gmail.com>.
Hi Gerard, I am not sure why the min.fetch.bytes setting would cause a pause.
Also, in the 0.9 config there is no min.fetch.bytes; the only bytes-related
setting is MAX_PARTITION_FETCH_BYTES_CONFIG. See:
https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/ConsumerConfig.html
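
For reference, a minimal sketch of applying that setting through the Java
client (the class name and group id are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MaxPartitionFetchExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "fetch-config-check");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());

        // "max.partition.fetch.bytes": an upper bound on the data returned
        // per partition in a single fetch (1 MB by default)
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1024 * 1024);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.close();
    }
}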


It definitely looks like a timeout is triggered somewhere.

Anyone else run into this issue with 0.9 console consumer?


Re: Stalling behaviour with 0.9 console consumer

Posted by Brice Dutheil <br...@gmail.com>.
Hi Gerard,

Why should the fetch size be correlated with the consumer stalling after x
messages?

One can set the fetch size on a Cassandra query, and yet there's no
"stalling"; it's more or less just another "page".



Cheers,
-- Brice


Re: Stalling behaviour with 0.9 console consumer

Posted by Gerard Klijs <ge...@dizzit.com>.
Hi Suyog,
It's working as intended. You could set the property min.fetch.bytes to a
small value to get fewer messages in each batch. Setting it to zero will
probably mean you get one object with each batch; at least that was the case
when I tried, but I was producing and consuming at the same time.
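
For a consumer written against the 0.9 Java client the property is spelled
fetch.min.bytes (ConsumerConfig.FETCH_MIN_BYTES_CONFIG); it works together
with fetch.max.wait.ms, which bounds how long the broker holds a fetch
request while waiting for enough data. A rough sketch, with placeholder
class, group, and topic names:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SmallFetchExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "small-fetch-test");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());

        // "fetch.min.bytes": the broker answers a fetch as soon as at least
        // this many bytes are available (1 by default)
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);
        // "fetch.max.wait.ms": how long the broker holds a fetch request when
        // fetch.min.bytes is not yet satisfied (500 ms by default)
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 100);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("apache_logs"));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("fetched " + records.count() + " records");
        }
    }
}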
