Posted to jira@kafka.apache.org by "Sagar Rao (Jira)" <ji...@apache.org> on 2021/08/10 04:21:00 UTC
[jira] [Commented] (KAFKA-13152) Replace "buffered.records.per.partition" with "input.buffer.max.bytes"
[ https://issues.apache.org/jira/browse/KAFKA-13152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396412#comment-17396412 ]
Sagar Rao commented on KAFKA-13152:
-----------------------------------
Hey [~guozhang], I would like to take this up. Will go through and post some notes here.
> Replace "buffered.records.per.partition" with "input.buffer.max.bytes"
> -----------------------------------------------------------------------
>
> Key: KAFKA-13152
> URL: https://issues.apache.org/jira/browse/KAFKA-13152
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Reporter: Guozhang Wang
> Assignee: Sagar Rao
> Priority: Major
> Labels: needs-kip
>
> The current config "buffered.records.per.partition" controls the maximum number of records to buffer per partition; once it is exceeded, we pause fetching from that partition. However, this config has two issues:
> * It's a per-partition config, so the total memory consumed depends on the dynamic number of partitions assigned.
> * Record size can vary from case to case.
> Hence it's hard to bound the memory usage of this buffering. We should consider deprecating that config in favor of a global one, e.g. "input.buffer.max.bytes", which controls how many bytes in total are allowed to be buffered. This is doable since we buffer the raw records as <byte[], byte[]>.
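The unbounded-memory concern above can be illustrated with a quick back-of-the-envelope calculation. All figures here are assumptions for illustration only, not values from the ticket:

```java
public class BufferMath {
    public static void main(String[] args) {
        // Assumed figures for illustration only.
        long recordsPerPartition = 1000;   // a per-partition record cap
        long avgRecordBytes = 10 * 1024;   // 10 KiB records; real sizes vary case to case
        long partitions = 100;             // depends on the dynamic partition assignment

        // With a per-partition record-count config, the worst-case memory
        // grows with BOTH record size and partition count:
        long worstCaseBytes = recordsPerPartition * avgRecordBytes * partitions;
        System.out.println(worstCaseBytes); // 1024000000, i.e. close to 1 GiB

        // A single global byte limit, as proposed in this ticket, would cap
        // the total regardless of how many partitions are assigned.
    }
}
```

With these assumed numbers, 1,000 records of 10 KiB across 100 partitions already approaches 1 GiB, which is why a global byte bound is easier to reason about than a per-partition record count.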
--
This message was sent by Atlassian Jira
(v8.3.4#803005)