Posted to users@kafka.apache.org by Eduardo Costa Alfaia <e....@unibs.it> on 2015/01/19 17:52:11 UTC

Message size issue

Hi All,
I am having an issue when using Kafka with librdkafka. I've changed message.max.bytes to 2MB in my server.properties config file, which is the size of my messages. When I run the command line ./rdkafka_performance -C -t test -p 0 -b computer49:9092, after consuming some messages the consumer just keeps waiting for something that never arrives, while my producer continues sending messages. Any idea?
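
For reference, a minimal sketch of the setup described above; the 2097152 value is an assumption for "2 MB", and the broker, topic and partition are taken from the command line in this message:

    # broker: server.properties (assumed value for a 2 MB limit)
    message.max.bytes=2097152

    # consumer: librdkafka performance tool, consuming topic "test", partition 0 from computer49
    ./rdkafka_performance -C -t test -p 0 -b computer49:9092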

% Using random seed 1421685059, verbosity level 1
% 214 messages and 1042835 bytes consumed in 20ms: 10518 msgs/s and 51.26 Mb/s, no compression
% 21788 messages and 106128192 bytes consumed in 1029ms: 21154 msgs/s and 103.04 Mb/s, no compression
% 43151 messages and 210185259 bytes consumed in 2030ms: 21252 msgs/s and 103.52 Mb/s, no compression
% 64512 messages and 314233575 bytes consumed in 3031ms: 21280 msgs/s and 103.66 Mb/s, no compression
% 86088 messages and 419328692 bytes consumed in 4039ms: 21313 msgs/s and 103.82 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 5719ms: 17571 msgs/s and 85.67 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 6720ms: 14955 msgs/s and 72.92 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 7720ms: 13018 msgs/s and 63.47 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 8720ms: 11524 msgs/s and 56.19 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 9720ms: 10339 msgs/s and 50.41 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 10721ms: 9374 msgs/s and 45.71 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 11721ms: 8574 msgs/s and 41.81 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 12721ms: 7900 msgs/s and 38.52 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 13721ms: 7324 msgs/s and 35.71 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 14721ms: 6826 msgs/s and 33.29 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 15722ms: 6392 msgs/s and 31.17 Mb/s, no compression
% 100504 messages and 490022646 bytes consumed in 16722ms: 6010 msgs/s and 29.30 Mb/s, no compression
........


When it has consumed all offsets, the software gives me the message:

% Consumer reached end of unibs.nec [0] message queue at offset 229790
RD_KAFKA_RESP_ERR__PARTITION_EOF: [-191]

However, with message.max.bytes changed to 2MB, I don't receive this code from Kafka.

Does anyone have any idea?

Thanks guys.
-- 
Privacy Notice: http://www.unibs.it/node/8155

Re: Message size issue

Posted by Eduardo Costa Alfaia <e....@unibs.it>.
Hi guys,

Ok, I’ve proved this and it was fine.

Thanks


Re: Message size issue

Posted by Joe Stein <jo...@stealth.ly>.
If you increase the size of the messages you produce, then you MUST also
change replica.fetch.max.bytes in the broker server.properties; otherwise
none of your replicas will be able to fetch from the leader and they will
all fall out of the ISR. You also need to change fetch.message.max.bytes
in your consumers' properties (however that is configured for the specific
consumer you are using) so that they can read that data; otherwise you
won't see messages downstream.

/*******************************************
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
********************************************/
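
As a rough illustration of the three settings involved (the byte values below are assumptions sized for the 2 MB messages in this thread, not recommendations):

    # broker: server.properties
    message.max.bytes=2097152          # largest message the broker will accept
    replica.fetch.max.bytes=2097152    # must be >= message.max.bytes so replicas can fetch large messages

    # consumer configuration (property name used by the 0.8.x high-level consumer and librdkafka)
    fetch.message.max.bytes=4000000    # must be >= the largest message to be consumed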


Re: Message size issue

Posted by Magnus Edenhill <ma...@edenhill.se>.
(duplicating the github answer for reference)

Hi Eduardo,

The default maximum fetch size is 1 MB, which means your 2 MB messages
will not fit in the fetch request.
Try increasing it by appending -X fetch.message.max.bytes=4000000 to your
command line.

Regards,
Magnus
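
For example, appending that override to the consumer command from the original post gives roughly this command line (broker, topic and partition as in that post):

    ./rdkafka_performance -C -t test -p 0 -b computer49:9092 \
        -X fetch.message.max.bytes=4000000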

