Posted to users@kafka.apache.org by Smriti Jha <sm...@agolo.com> on 2017/04/11 18:53:54 UTC
Kafka producer drops large messages
Hello all,
Can somebody shed light on the Kafka producer's behavior when the total size
of all messages in the buffer (accumulated for up to queue.buffering.max.ms)
exceeds the socket buffer size (send.buffer.bytes)?
I'm using Kafka v0.8.2 with the old Producer API and have noticed that our
systems are dropping a few messages that are close to 1MB in size. A few
messages that are only a few KB in size and are sent around the same time as
the >1MB messages also get dropped. The official documentation does talk
about never dropping a "send" once the buffer reaches
queue.buffering.max.messages, but I don't think that applies to the size of
the messages.
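For reference, here is a sketch of the producer configuration in question
(property names are from the 0.8.2 old-producer docs; the broker list and
values are placeholders, not our real setup):

```properties
# Old (Scala) producer configuration, Kafka 0.8.2 -- illustrative values only
metadata.broker.list=broker1:9092,broker2:9092
producer.type=async
# Flush the async buffer at least this often (ms)
queue.buffering.max.ms=5000
# Maximum number of unsent messages the async queue may hold
queue.buffering.max.messages=10000
# TCP socket send buffer size (bytes)
send.buffer.bytes=102400
```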
Thanks!
Re: Kafka producer drops large messages
Posted by Akhilesh Pathodia <pa...@gmail.com>.
Hi Smriti,
You will have to change some broker configurations, such as message.max.bytes,
to a larger value. The default value is 1 MB, I guess.
Please check the configs below:
Broker Configuration
<https://www.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html#concept_gqw_rcz_yq__section_wsx_xcz_yq>

- message.max.bytes
  Maximum message size the broker will accept. Must be smaller than the
  consumer fetch.message.max.bytes, or the consumer cannot consume the
  message.
  Default value: 1000000 (~1 MB)

- log.segment.bytes
  Size of a Kafka data file. Must be larger than any single message.
  Default value: 1073741824 (1 GiB)

- replica.fetch.max.bytes
  Maximum message size a broker can replicate. Must be larger than
  message.max.bytes, or a broker can accept messages it cannot replicate,
  potentially resulting in data loss.
  Default value: 1048576 (1 MiB)
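For example, to allow messages of up to roughly 2 MB end to end, you could set
something like the following (a sketch only; the exact byte values are
examples, pick sizes that fit your workload, keeping replica.fetch.max.bytes
above message.max.bytes and fetch.message.max.bytes at or above both):

```properties
# broker (server.properties)
message.max.bytes=2000000
replica.fetch.max.bytes=2097152

# consumer
fetch.message.max.bytes=2097152
```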
Thanks,
Akhilesh
On Wed, Apr 12, 2017 at 12:23 AM, Smriti Jha <sm...@agolo.com> wrote:
> Hello all,
>
> Can somebody shed light on the Kafka producer's behavior when the total size
> of all messages in the buffer (accumulated for up to queue.buffering.max.ms)
> exceeds the socket buffer size (send.buffer.bytes)?
>
> I'm using Kafka v0.8.2 with the old Producer API and have noticed that our
> systems are dropping a few messages that are close to 1MB in size. A few
> messages that are only a few KB in size and are sent around the same time as
> the >1MB messages also get dropped. The official documentation does talk
> about never dropping a "send" once the buffer reaches
> queue.buffering.max.messages, but I don't think that applies to the size of
> the messages.
>
> Thanks!
>