Posted to users@kafka.apache.org by "Bae, Jae Hyeon" <me...@gmail.com> on 2014/01/20 07:09:53 UTC

Question on MessageSizeTooLargeException

Hello

I finally upgraded from Kafka 0.7 to Kafka 0.8, and a few Kafka 0.8 clusters are
being tested now.

Today, I got an alert with the following message:

 "data": {
    "exceptionMessage": "Found a message larger than the maximum fetch size
of this consumer on topic nf_errors_log partition 0 at fetch offset
76736251. Increase the fetch size, or decrease the maximum message size the
broker will allow.",
    "exceptionStackTrace": "kafka.common.MessageSizeTooLargeException:
Found a message larger than the maximum fetch size of this consumer on
topic nf_errors_log partition 0 at fetch offset 76736251. Increase the
fetch size, or decrease the maximum message size the broker will allow.
    "exceptionType": "kafka.common.MessageSizeTooLargeException"
  },
  "description": "RuntimeException aborted realtime
processing[nf_errors_log]"

What I don't understand is that I am using all default properties, which means:

the broker's message.max.bytes is 1000000
the consumer's fetch.message.max.bytes is 1024 * 1024, which is greater than the
broker's message.max.bytes

How could this happen? I am using snappy compression.

Thank you
Best, Jae
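
A minimal sketch of the consumer side of those two settings, assuming the 0.8
high-level (Java) consumer API; the ZooKeeper address and group id are
placeholders, and the fetch size shown is just the default mentioned above:

import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class FetchSizeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "nf-errors-log-consumer");  // placeholder
        // Default fetch size (1024 * 1024 bytes). It must stay at least as
        // large as the biggest (possibly compressed) message the broker will
        // return, i.e. at least the broker's message.max.bytes.
        props.put("fetch.message.max.bytes", String.valueOf(1024 * 1024));

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume as usual ...
        consumer.shutdown();
    }
}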

Re: Question on MessageSizeTooLargeException

Posted by Jun Rao <ju...@gmail.com>.
Great. Please open a JIRA and attach your patch there.

Thanks,

Jun



Re: Question on MessageSizeTooLargeException

Posted by "Bae, Jae Hyeon" <me...@gmail.com>.
Nope, just packaging for the Netflix cloud environment.

The first one is that producer discovery (metadata.broker.list) is integrated
with Netflix Eureka.
The second one is that the Yammer metrics library is connected to Netflix Servo.
Aside from these two big things, I changed a few lines to fit our monitoring
environment.

If I have a chance, I will send a pull request to you.

Thank you
Best, Jae
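
A minimal sketch of the producer configuration that the Eureka integration
replaces, assuming the 0.8 (Java) producer API; the broker list, topic, and
message are placeholders (in the setup described above the list would be
resolved through Eureka rather than hard-coded):

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerDiscoveryExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Normally supplied by a discovery service (e.g. Eureka); placeholders here.
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("compression.codec", "snappy");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("nf_errors_log", "hello"));
        producer.close();
    }
}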



Re: Question on MessageSizeTooLargeException

Posted by Jun Rao <ju...@gmail.com>.
What kind of customization are you performing? Are you changing the wire
and on-disk protocols?

Thanks,

Jun



Re: Question on MessageSizeTooLargeException

Posted by "Bae, Jae Hyeon" <me...@gmail.com>.
Due to the short retention period, I don't have that log segment anymore.

Here is how I am developing Kafka:

I forked apache/kafka into my personal repo and customized it a little bit. I
kept tracking the 0.8 branch, but you seem to have moved to the trunk branch.

I will update it to the trunk branch or the 0.8.0 tag.

Thank you
Best, Jae





Re: Question on MessageSizeTooLargeException

Posted by Jun Rao <ju...@gmail.com>.
Could you use our DumpLogSegments tool on the relevant log segment and see
if the log is corrupted? Also, are you using the 0.8.0 release?

Thanks,

Jun
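
A minimal sketch of running that tool, written here as a direct call to its
main method (normally it is launched via bin/kafka-run-class.sh); the segment
path is a placeholder for whichever segment covers offset 76736251:

import kafka.tools.DumpLogSegments;

public class DumpSegmentExample {
    public static void main(String[] args) {
        // Equivalent to:
        //   bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files <segment>
        // The path below is a placeholder for the segment that covers the
        // failing offset of nf_errors_log partition 0.
        DumpLogSegments.main(new String[] {
                "--files", "/tmp/kafka-logs/nf_errors_log-0/00000000000000000000.log"
        });
    }
}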

