Posted to users@kafka.apache.org by Tech Bolek <te...@yahoo.com.INVALID> on 2016/02/03 03:04:38 UTC

kafka “stops working” after a large message is enqueued

I'm running kafka_2.11-0.9.0.0 with a Java-based producer and consumer. With messages of ~70 KB everything works fine. However, after the producer enqueues a larger, 70 MB message, Kafka appears to stop delivering messages to the consumer: not only is the large message never delivered, but neither are subsequent smaller messages. I know the producer succeeds because I use the Kafka producer callback for confirmation, and I can see the messages in the Kafka message log.
Kafka broker config (custom changes):

    message.max.bytes=200000000
    replica.fetch.max.bytes=200000000

Consumer config:

    props.put("fetch.message.max.bytes",   "200000000");
    props.put("max.partition.fetch.bytes", "200000000");

Re: kafka “stops working” after a large message is enqueued

Posted by Ewen Cheslack-Postava <ew...@confluent.io>.
The default maximum message size is 1 MB. You'll probably need to increase a few
settings: the maximum message size on the broker (max.message.bytes at the topic
level, or broker-wide with message.max.bytes), max.partition.fetch.bytes on the
new consumer, and so on. You need to make sure the producer, broker, and consumer
settings are all sufficiently large to accommodate your message sizes.
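As a sketch (the values simply mirror the 200000000 from the original post), the
broker side might carry overrides like the following; note that the per-topic name
differs from the broker-wide one:

    # server.properties -- broker-wide limits
    message.max.bytes=200000000
    replica.fetch.max.bytes=200000000

    # per-topic equivalent, applied when the topic is created or altered:
    # max.message.bytes=200000000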

If you increase the maximum message size by this much, even more settings may
come into play. For example, I think you'll need to increase buffer.memory on
the producer, because a 70 MB message exceeds its default size of 32 MB.
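A minimal producer-side sketch under those assumptions (broker address, topic name,
and sizes are placeholders, not settings from this thread; max.request.size is the
producer's own per-request cap, which defaults to 1 MB and also has to be raised for
a 70 MB record):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LargeMessageProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Raise the per-request cap above its 1 MB default ...
            props.put("max.request.size", "200000000");
            // ... and give the record accumulator room beyond its 32 MB default.
            props.put("buffer.memory", "220000000");

            KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props);
            byte[] payload = new byte[70 * 1024 * 1024]; // ~70 MB test payload
            producer.send(new ProducerRecord<>("large-messages", payload),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace(); // delivery failed
                        } else {
                            System.out.println("acked at offset " + metadata.offset());
                        }
                    });
            producer.close(); // flushes the pending send
        }
    }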

When you exceed the default message size by that much, you'll definitely want
to review all of your producer, broker, and consumer settings carefully to
make sure they can handle your requirements.

-Ewen


Re: kafka “stops working” after a large message is enqueued

Posted by Joe Lawson <jl...@opensourceconnections.com>.
Interesting. Check whether you see any errors on the broker during these
silent client failures. Here is a relevant Google search:
https://www.google.com/search?q=java+heap+size+large+kafka+message

Re: kafka “stops working” after a large message is enqueued

Posted by Tech Bolek <te...@yahoo.com.INVALID>.
Deleted the topic and recreated it (with max bytes set), but that did not help. What helped, though, was upping the Java heap size. I monitored the consumer with jstat and noticed two full garbage collections right after the large message was published; after that the consumer appeared dormant. Upping the Java heap size allowed the consumer to process the message. I'm wondering why the consumer remained silent, i.e. no out-of-memory error or anything.
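A plausible explanation (an assumption, not something confirmed in this thread): with
max.partition.fetch.bytes near 200 MB, the consumer tries to allocate fetch buffers of
roughly that size, and on a small default heap the JVM can stall in back-to-back full
GCs without ever surfacing an OutOfMemoryError on a thread you are watching. A
defensive sketch (the helper name is hypothetical) that fails fast instead of
stalling silently:

    public final class HeapSanityCheck {
        /**
         * Fail fast if the heap clearly cannot hold even one maximum-size
         * fetch. The factor of 2 is a rough heuristic, not a Kafka rule.
         */
        public static void require(long maxPartitionFetchBytes) {
            long maxHeap = Runtime.getRuntime().maxMemory();
            if (maxHeap < 2 * maxPartitionFetchBytes) {
                throw new IllegalStateException("max heap " + maxHeap
                        + " bytes is too small for max.partition.fetch.bytes="
                        + maxPartitionFetchBytes + "; raise -Xmx");
            }
        }
    }

Calling HeapSanityCheck.require(200000000L) before constructing the consumer, and
starting the JVM with something like -Xmx1g, would turn the silent GC stall into an
immediate, explicit error.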


Re: kafka “stops working” after a large message is enqueued

Posted by Joe Lawson <jl...@opensourceconnections.com>.
Make sure the topic is created after message.max.bytes is set.
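For example (the topic name, partition count, and replication factor below are
placeholders), creating the topic with the per-topic override already in place:

    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
        --topic large-messages --partitions 1 --replication-factor 1 \
        --config max.message.bytes=200000000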