Posted to users@kafka.apache.org by Rajiv Kurian <ra...@signalfx.com> on 2016/01/11 22:08:20 UTC

fallout from upgrading to the new Kafka producers

We have recently upgraded some of our applications to the Kafka 0.8.2
Java producer, from the old Java wrappers over the Scala producer.

Since the upgrade, we've noticed log messages like this in our applications:

2016-01-11T20:56:43.023Z WARN  [producer-network-thread | producer-2] [s.o.a.kafka.common.network.Selector ] {}: Error in I/O with my_kafka_host/some_ip
java.io.EOFException: null
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[kafka_2.10-0.8.2.2.jar:na]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[kafka_2.10-0.8.2.2.jar:na]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [kafka_2.10-0.8.2.2.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [kafka_2.10-0.8.2.2.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [kafka_2.10-0.8.2.2.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]

They don't occur often and may be harmless, but they are alarming to see.
They happen with all the brokers we connect to, so it doesn't seem like a
problem with a single broker. Our producer config looks a bit like this:

final Properties config = new Properties();
config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, OUR_KAFKA_CONNECT_STRING);
config.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, false);  // non-blocking
config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 10 * 1024 * 1024);  // 10 MB
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);  // 16 KB
config.put(ProducerConfig.LINGER_MS_CONFIG, 50);  // 50 ms


Thanks,

Rajiv

Re: fallout from upgrading to the new Kafka producers

Posted by Rajiv Kurian <ra...@signalfx.com>.
Thanks!


Re: fallout from upgrading to the new Kafka producers

Posted by Guozhang Wang <wa...@gmail.com>.
Rajiv,

The 0.9.0 Java producer includes a few minor bug fixes over 0.8.2 and adds
one new API method, "flush()": https://issues.apache.org/jira/browse/KAFKA-1865

You can review the changes by searching for "producer" in the
release notes:

http://mirror.stjschools.org/public/apache/kafka/0.9.0.0/RELEASE_NOTES.html


Guozhang
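[Editor's note: since the fix discussed here is moving to the 0.9.0 producer, a minimal sketch of the config migration may help. This is an illustration, not from the thread: in 0.9.0, block.on.buffer.full was deprecated in favor of max.block.ms, which bounds how long send() may block. String keys are shown in place of the ProducerConfig constants so the snippet is self-contained; the bootstrap string is a placeholder.]

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static Properties buildConfig(String bootstrapServers) {
        Properties config = new Properties();
        // Same settings as the original post, via the string keys behind
        // the ProducerConfig constants.
        config.put("bootstrap.servers", bootstrapServers);
        config.put("buffer.memory", "10485760");   // 10 MB
        config.put("batch.size", "16384");         // 16 KB
        config.put("linger.ms", "50");             // 50 ms
        // 0.9.0: block.on.buffer.full is deprecated; max.block.ms bounds
        // how long send() may block (0 approximates the old non-blocking
        // behavior by failing fast when the buffer is full).
        config.put("max.block.ms", "0");
        return config;
    }

    public static void main(String[] args) {
        Properties config = buildConfig("broker1:9092,broker2:9092");
        System.out.println(config.getProperty("max.block.ms"));
    }
}
```

These Properties would then be passed to the KafkaProducer constructor as before; only the blocking-related key changes.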



Re: fallout from upgrading to the new Kafka producers

Posted by Rajiv Kurian <ra...@signalfx.com>.
Thanks Guozhang. I have upgraded to 0.9.0 now. Are there any other
producer changes to be aware of? My understanding is that there were no big
producer changes between 0.8.2 and 0.9.0.

Thanks,
Rajiv


Re: fallout from upgrading to the new Kafka producers

Posted by Guozhang Wang <wa...@gmail.com>.
Hi Rajiv,

This warning can be ignored, and in 0.9.0 we downgraded its logging level
from WARN to DEBUG, so if you upgrade to the 0.9.0 Java producer you should
no longer see it.

A bit more context on the EOFException: it results from a socket closure,
and the server may actively close a socket in some cases. For example:
1) if the connection has been idle for some time, the server may decide to
close it based on its idle-connection management config, or 2) if the
producer uses acks=0 and there is an error processing the request, the
server just closes the socket to "notify" the client.

Guozhang
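[Editor's note: a sketch of the two client-side knobs most relevant to the disconnects described above. This is an assumption-laden illustration, not from the thread: with acks=0 the producer gets no response at all, so a socket close is the broker's only error signal; acks=1 restores per-request responses. On 0.9.0+ clients, connections.max.idle.ms lets the client close idle connections itself before the broker's idle timeout does (check your broker's own connections.max.idle.ms for the server-side value).]

```java
import java.util.Properties;

public class DisconnectTuningSketch {
    public static Properties tune(Properties config) {
        // acks=1: the broker sends a response per request, so errors come
        // back to the client instead of a silent socket close (acks=0).
        config.put("acks", "1");
        // 0.9.0+ clients: proactively close connections idle this long,
        // chosen here to be shorter than a typical broker-side idle timeout.
        config.put("connections.max.idle.ms", "300000"); // 5 minutes
        return config;
    }

    public static void main(String[] args) {
        Properties config = tune(new Properties());
        System.out.println(config.getProperty("acks"));
    }
}
```

Whether to change acks is a durability/latency trade-off, not just a logging fix; the EOFException itself is benign either way, as noted above.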



