Posted to jira@kafka.apache.org by "Guozhang Wang (Jira)" <ji...@apache.org> on 2021/07/30 18:56:00 UTC

[jira] [Created] (KAFKA-13152) Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

Guozhang Wang created KAFKA-13152:
-------------------------------------

             Summary: Replace "buffered.records.per.partition" with "input.buffer.max.bytes" 
                 Key: KAFKA-13152
                 URL: https://issues.apache.org/jira/browse/KAFKA-13152
             Project: Kafka
          Issue Type: Improvement
          Components: streams
            Reporter: Guozhang Wang


The current config "buffered.records.per.partition" controls the maximum number of records to buffer (bookkeep) per partition; once that limit is exceeded, we pause fetching from the partition. However, this config has two issues:

* It's a per-partition config, so the total memory consumed depends on the number of partitions dynamically assigned (a worked example follows the list).
* Record sizes vary from case to case, so a per-record count does not translate into a predictable byte bound.
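
As a rough illustration of why the count-based bound is hard to reason about (assuming the default of 1000 buffered records per partition): a task group assigned 100 partitions with ~1 KB records can buffer about 100 MB, while the same application with 500 partitions and ~10 KB records could buffer roughly 5 GB under the exact same config value.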

And hence it's hard to bound the memory usage for this buffering. We should consider deprecating that config in favor of a global one, e.g. "input.buffer.max.bytes", which controls how many bytes in total are allowed to be buffered. This is doable since we buffer the raw records as <byte[], byte[]>.
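
A minimal sketch of how the existing and proposed configs would sit side by side in a Streams application. Note that "input.buffer.max.bytes" is only this ticket's proposed name, not a released config, and the values below are arbitrary:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class InputBufferConfigSketch {
        public static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "buffer-bound-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            // Existing per-partition record-count bound (the config this ticket
            // proposes to deprecate); total memory scales with the number of
            // assigned partitions and with record size.
            props.put("buffered.records.per.partition", 1000);

            // Proposed global byte bound across all buffered raw records,
            // per this ticket; the name and value here are hypothetical.
            props.put("input.buffer.max.bytes", 512L * 1024 * 1024); // 512 MB

            return props;
        }
    }

Since the raw records are kept as <byte[], byte[]>, one plausible enforcement is to sum the key and value array lengths across all partition buffers and pause fetching once the running total crosses the configured threshold.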



--
This message was sent by Atlassian Jira
(v8.3.4#803005)