Posted to dev@kafka.apache.org by "Jason Gustafson (JIRA)" <ji...@apache.org> on 2018/01/24 19:59:00 UTC
[jira] [Created] (KAFKA-6480) Add config to enforce max fetch size on the broker
Jason Gustafson created KAFKA-6480:
--------------------------------------
Summary: Add config to enforce max fetch size on the broker
Key: KAFKA-6480
URL: https://issues.apache.org/jira/browse/KAFKA-6480
Project: Kafka
Issue Type: Bug
Reporter: Jason Gustafson
Users are increasingly hitting memory problems due to message format down-conversion. The problem is basically that the broker has to do the down-conversion in memory. Since the default fetch size is 50 MB, it doesn't take many concurrent fetch requests to cause an OOM. One mitigation is KAFKA-6352. It would also be helpful if the broker had a configuration to restrict the maximum allowed fetch size across all consumers. This would also prevent a malicious client from exploiting large fetches to DoS the server.
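The enforcement the issue asks for could look roughly like the sketch below: the broker compares the client's requested fetch size against a broker-wide cap and uses the smaller value when building the response. The class and config names here (`FetchLimits`, `BROKER_MAX_FETCH_BYTES`) are illustrative assumptions, not Kafka's actual code or configuration keys.

```java
// Hypothetical sketch of broker-side fetch-size enforcement.
// All names are illustrative; real Kafka would read the cap from server config.
public class FetchLimits {

    // Assumed broker-wide cap on any single fetch response (10 MB here).
    static final int BROKER_MAX_FETCH_BYTES = 10 * 1024 * 1024;

    // Effective fetch size: the client's requested size, clamped to the cap.
    static int effectiveFetchSize(int requestedBytes) {
        return Math.min(requestedBytes, BROKER_MAX_FETCH_BYTES);
    }

    public static void main(String[] args) {
        // The default consumer fetch size mentioned in the issue: 50 MB.
        int requested = 50 * 1024 * 1024;
        // The broker would serve at most the clamped amount per fetch,
        // bounding memory used for in-memory down-conversion.
        System.out.println(effectiveFetchSize(requested));
    }
}
```

With a cap like this in place, even a client that requests 50 MB per fetch (or a malicious client requesting far more) cannot force the broker to buffer more than the configured limit per request.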
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)