Posted to users@kafka.apache.org by Milind Vaidya <ka...@gmail.com> on 2017/05/23 19:09:45 UTC

Question regarding buffer.memory, max.request.size and send.buffer.bytes

I have set the producer properties as follows (0.10.0.0)

*"linger.ms <http://linger.ms>"** : **"500"** ,*

 *"batch.size"** : **"1000"**,*

*"buffer.memory"** :**"**10000**"**,*

 *"send.buffer.bytes"** : **"512000"*

*and default *

* max.request.size = *1048576


If records are sent faster than they can be delivered, they will be
buffered. Now, with buffer.memory set to 10000 bytes, what will happen if a
single record is larger than that, say 11629 bytes in size? What is the
minimum value of buffer.memory in terms of the other params? Should it be
at least equal to send.buffer.bytes or max.request.size, or is it better
left at the default, which is 33554432?

I am trying to debug why some events are not reaching the consumer, so I am
wondering if this could be the reason.
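
For concreteness, this is roughly how the producer is set up (the bootstrap
server, serializers, and topic name are placeholders; the tuning values are
the ones above):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSetupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("linger.ms", "500");            // wait up to 500 ms to fill a batch
        props.put("batch.size", "1000");          // target batch size per partition, in bytes
        props.put("buffer.memory", "10000");      // total memory for buffered records, in bytes
        props.put("send.buffer.bytes", "512000"); // TCP socket send buffer
        // max.request.size is left at its default of 1048576

        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "an event"));  // placeholder topic
        producer.close();
    }
}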

Re: Question regarding buffer.memory, max.request.size and send.buffer.bytes

Posted by Mohammed Manna <ma...@gmail.com>.
As the documentation suggests, you should keep it the same as your
max.request.size. The documentation also says that if you are producing
faster than you can send, the producer will block, and will throw an
exception once max.block.ms is reached.
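
Roughly, with the 0.10.x Java client, both failure modes can be seen with a
sketch like this (bootstrap server, topic, and payload size are placeholders;
depending on the client version the error surfaces either synchronously from
send() or through the returned future):

import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BufferLimitSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("buffer.memory", "10000");  // deliberately small, as in the question
        props.put("max.block.ms", "5000");    // give up after 5 s if the buffer stays full

        // ~11629 bytes of payload, i.e. larger than buffer.memory - it can never fit
        char[] big = new char[11629];
        Arrays.fill(big, 'x');
        String hugePayload = new String(big);

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                producer.send(new ProducerRecord<>("my-topic", hugePayload)).get();
            } catch (ExecutionException e) {
                // A record bigger than buffer.memory (or max.request.size) fails with
                // a RecordTooLargeException; a buffer that is merely full makes send()
                // block up to max.block.ms and then fail with a TimeoutException.
                System.err.println("send failed: " + e.getCause());
            }
        }
    }
}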




Re: Question regarding buffer.memory, max.request.size and send.buffer.bytes

Posted by Milind Vaidya <ka...@gmail.com>.
I am looking for producer tuning, as mentioned in the mail; all of these
properties belong to the producer config.

This is where the property is documented:
https://kafka.apache.org/0100/documentation.html#producerconfigs

The consumer in this case is KafkaSpout from Apache Storm.
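
For context, the spout is wired up more or less like this (the ZooKeeper
host, topic, and ids are placeholders, and this assumes the classic
storm-kafka SpoutConfig-based spout):

import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class SpoutWiringSketch {
    public static void main(String[] args) {
        // KafkaSpout discovers brokers and tracks offsets via ZooKeeper
        BrokerHosts hosts = new ZkHosts("localhost:2181");  // placeholder
        SpoutConfig spoutConfig =
                new SpoutConfig(hosts, "my-topic", "/my-topic", "my-spout-id");  // placeholders
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);
        // ... bolts that consume the spout's output go here ...
    }
}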






Re: Question regarding buffer.memory, max.request.size and send.buffer.bytes

Posted by Mohammed Manna <ma...@gmail.com>.
This could be for various reasons:

1) Your consumer property settings - if you are not committing offsets
automatically, you need to allow a sufficient polling interval and commit
synchronously or asynchronously (see the sketch after this list).
2) You are not consuming the messages the way you think you are.
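
For point 1, I mean something like this (group id, topic, and poll timeout
are placeholders; this disables auto-commit and commits manually after each
batch is processed):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("group.id", "my-group");                 // placeholder
        props.put("enable.auto.commit", "false");          // we commit manually below
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder
        try {
            while (true) {
                // Give poll() a sufficient timeout so a slow broker can still return data
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                consumer.commitSync();  // or commitAsync(), once the batch is processed
            }
        } finally {
            consumer.close();
        }
    }
}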

I don't know how you got this buffer.memory property. It doesn't sound
right - could you kindly check it again? Also, could you please provide a
snippet of your consumer and how you are reading from the stream?

By default, the buffer is about 10% of message.max.bytes. Perhaps you are
looking for producer tuning using the following:

batch.size
message.max.bytes
send.buffer.bytes
Cloudera and Confluent.io have some nice articles on Kafka. Have a read
through this:
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html


