Posted to users@kafka.apache.org by Gerrit Jansen van Vuuren <ge...@gmail.com> on 2014/01/01 13:24:57 UTC

java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

While consuming from the topics I get an IllegalArgumentException and all
consumption stops; the error keeps being thrown.

I've tracked it down to FetchResponse.scala line 33.

The error happens when the FetchResponsePartitionData object's readFrom
method calls:
messageSetBuffer.limit(messageSetSize)

I put in some debug code: the messageSetSize is 671758648, while
buffer.capacity() gives 155733313; for some reason the buffer is smaller
than the required message set size.
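
To see the same failure outside of Kafka, here is a minimal java.nio sketch
(this is not the Kafka code itself, and the capacity and size below are small
stand-ins for the real values above):

    import java.nio.ByteBuffer

    object LimitDemo {
      def main(args: Array[String]): Unit = {
        val buffer = ByteBuffer.allocate(1024)   // stands in for the 155733313-byte fetch buffer
        val messageSetSize = 4096                // stands in for the 671758648-byte message set size
        // ByteBuffer.limit(n) throws java.lang.IllegalArgumentException whenever n > capacity()
        buffer.limit(messageSetSize)
      }
    }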

I don't know the consumer code well enough to debug this. It doesn't matter if
compression is used or not.

I've created a jira ticket for this:
https://issues.apache.org/jira/browse/KAFKA-1196

This is a real pain for me because I'm unable to consume from Kafka at all
:(


Any ideas on possible config or code changes I could try as a fix?

Regards,
 Gerrit

Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Jun Rao <ju...@gmail.com>.
If a broker is the leader of multiple partitions of a topic, the high-level
consumer will fetch all of those partitions in a single fetch request, so the
aggregate of the data fetched from multiple partitions could be more than
2GB.

You can try using more consumers in the same consumer group to reduce the
number of partitions fetched per consumer.
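
As a rough sketch of the arithmetic (the 600MB fetch size and the partition
counts below are only illustrative, taken from the numbers mentioned in this
thread), what has to stay under 2GB is partitions-per-consumer times fetch size:

    object FetchBudget {
      def main(args: Array[String]): Unit = {
        val fetchSize   = 600L * 1024 * 1024       // 600MB per-partition fetch size
        val maxResponse = Int.MaxValue.toLong      // a 4-byte signed size field tops out just under 2GB

        // one consumer owning all 12 partitions asks for ~7.2GB in a single response
        println(12 * fetchSize > maxResponse)      // true  -> overflow territory

        // spread over 6 consumers (about 2 partitions each), each response is ~1.2GB
        println(2 * fetchSize > maxResponse)       // false -> safely under the limit
      }
    }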

Thanks,

Jun



Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Gerrit Jansen van Vuuren <ge...@gmail.com>.
No, I can't :(. I upped it because some of the messages can be big. The
question still remains: 600MB is far from the 2GB int limit, so is there
any reason why a 600MB max size would cause the fetch buffer to overflow?




Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Jun Rao <ju...@gmail.com>.
Could you reduce the max message size? Do you really expect to have a
single message of 600MB? After that, you can reduce the fetch size.

Thanks,

Jun



Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Gerrit Jansen van Vuuren <ge...@gmail.com>.
There is a particular topic that has a lot of data in each message; there
is nothing I can do about that.
Because I have so much data, I try to split it over 8-12 partitions;
if I reduce the partitions, I won't have enough consumers to consume the
data in time.



Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Jun Rao <ju...@gmail.com>.
600MB for the fetch size is considerably larger than the default. Is there
a particular reason for this? Also, how many partitions do you have? You
may have to reduce the fetch size further if there are multiple partitions.

Thanks,

Jun



Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Gerrit Jansen van Vuuren <ge...@gmail.com>.
Hi,

I just double-checked my configuration: the broker has message.max.bytes
set to 1GB, and the consumers have the same setting for the max fetch size.
I've lowered this to 600MB and still see the same error :(.
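
For reference, the settings in question look roughly like this (I'm assuming
the 0.8-era property names here; message.max.bytes is the broker-side limit
and, if I have the name right, fetch.message.max.bytes is the high-level
consumer's per-partition fetch size):

    # broker, server.properties
    message.max.bytes=629145600          # 600MB, the largest single message the broker accepts

    # high-level consumer
    fetch.message.max.bytes=629145600    # 600MB per partition; a single fetch request can still
                                         # ask for this much for every partition the consumer owns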

At the moment Kafka is unusable for me; the only other alternative is
writing my own client (as I'm doing with the producer). What a pain!






Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Jun Rao <ju...@gmail.com>.
In our wire protocol, we expect the first 4 bytes of a response to be its
size. If the actual size is larger than 2GB, what's stored in those 4
bytes is the overflowed value. This can cause some of the buffer sizes to
be smaller than they should be later on. If #partitions * fetch_size is
larger than 2GB in a single fetch request, you could hit this problem. You
can try reducing the fetch size. Ideally, the sender should catch this and
throw an exception, which we don't do currently.
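
As a minimal sketch of that wrap-around (the 8 x 600MB request is only
illustrative, not necessarily the exact request in this case):

    object SizeFieldOverflow {
      def main(args: Array[String]): Unit = {
        val actualSize = 8 * 600L * 1024 * 1024   // 4831838208 bytes: needs more than 32 bits
        val sizeField  = actualSize.toInt         // what a 4-byte signed field ends up holding
        println(s"actual=$actualSize bytes, 4-byte field=$sizeField")
        // prints 536870912 for the field: a valid-looking but far too small size, which is how a
        // reader can later end up with a buffer smaller than a message set that has to fit in it
      }
    }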

Thanks,

Jun



Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Gerrit Jansen van Vuuren <ge...@gmail.com>.
Mm... could be, though I'm not sure if it's in a single request. I am moving
a lot of data. Any pointer to where in the code the overflow might start?

Re: java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

Posted by Jun Rao <ju...@gmail.com>.
Are you fetching more than 2GB of data in a single fetch response (across
all partitions)? Currently, we don't handle integer overflow properly.

Thanks,

Jun

