Posted to users@kafka.apache.org by Dominik Safaric <do...@gmail.com> on 2016/10/29 12:57:15 UTC

Increasing producer throughput

Dear all,

As my team is in the process of benchmarking several stream processing engines that consume data from Kafka, I've been investigating how to increase Kafka producer throughput.

For running Kafka we use a single node with a single broker; the Kafka heap size is set to 4 GB. All messages produced are the same size, 8 bytes each. So far I've experimented with various configuration settings, including but not limited to the batch size, max request size, and number of acks. In addition, we've even created a shared thread pool to run multiple producer instances, but none of these changes yielded a significant improvement.
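For reference, here is a minimal sketch (in Java) of the kind of producer setup and the settings we have been varying; the broker address, topic name, and the concrete values below are illustrative placeholders rather than the exact configuration we tested:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ThroughputProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // single-broker setup
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        // Settings we have been varying; the values here are illustrative, not tuned:
        props.put("acks", "1");                   // also tried "0" and "all"
        props.put("batch.size", "65536");         // larger batches amortize per-request overhead
        props.put("linger.ms", "5");              // let batches fill up before sending
        props.put("max.request.size", "1048576");
        props.put("compression.type", "lz4");     // shrinks requests at modest CPU cost
        props.put("buffer.memory", "67108864");

        byte[] payload = new byte[8]; // fixed 8-byte messages, as in our benchmark

        // In the multi-producer variant, several instances like this are
        // submitted to a shared thread pool, one producer per thread.
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            for (long i = 0; i < 10_000_000L; i++) {
                producer.send(new ProducerRecord<>("benchmark-topic", payload));
            }
            producer.flush();
        }
    }
}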

Currently, the throughput we manage to achieve on this instance (16 CPUs, 64 GB of RAM, a few TB of disk) is on average 700,000 messages per second, which at 8 bytes per message amounts to only about 5.6 MB/s of raw payload.

Generally we do not care about message delivery semantics, but we do care about not adding latency on the consumer's end.

Any advice from your personal experience? 

Thanks in advance!

Dominik