Posted to dev@kafka.apache.org by "Apurva Mehta (JIRA)" <ji...@apache.org> on 2017/04/12 23:37:41 UTC
[jira] [Created] (KAFKA-5062) Kafka brokers can accept malformed requests which allocate gigabytes of memory
Apurva Mehta created KAFKA-5062:
-----------------------------------
Summary: Kafka brokers can accept malformed requests which allocate gigabytes of memory
Key: KAFKA-5062
URL: https://issues.apache.org/jira/browse/KAFKA-5062
Project: Kafka
Issue Type: Bug
Reporter: Apurva Mehta
In some circumstances, it is possible to cause a Kafka broker to allocate massive amounts of memory by writing malformed bytes to the broker's port.
While investigating an issue, we saw byte arrays on the Kafka heap of up to 1.8 gigabytes, the first 360 bytes of which were not Kafka requests -- an application was writing the wrong data to Kafka, causing the broker to interpret the request size as 1.8GB and then allocate that amount. Apart from the first 360 bytes, the rest of the 1.8GB byte array was null.
We have socket.request.max.bytes set to 100MB to protect against this kind of thing, but somehow that limit is not always respected. We need to investigate why and fix it.
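As a minimal sketch of the failure mode (not the actual broker code): every Kafka request starts with a 4-byte big-endian size prefix, so four arbitrary bytes from a misbehaving client are interpreted as a request size, and the receiver allocates a buffer of that size before any further validation. The class name, method, and the 100MB constant below are illustrative; the 0x6B6B6B6B value shows how four repeated ASCII 'k' bytes decode to roughly 1.8GB, matching the heap dumps we saw.

```java
import java.nio.ByteBuffer;

public class SizePrefixCheck {
    // Illustrative stand-in for socket.request.max.bytes (default 100MB).
    static final int MAX_REQUEST_SIZE = 100 * 1024 * 1024;

    // Read the 4-byte length prefix and reject it BEFORE allocating a
    // buffer of that size; skipping this check is what lets garbage bytes
    // trigger a multi-gigabyte allocation.
    static int readRequestSize(ByteBuffer header) {
        int size = header.getInt(); // first 4 bytes interpreted as request size
        if (size <= 0 || size > MAX_REQUEST_SIZE) {
            throw new IllegalStateException(
                "Invalid request size " + size + " (max " + MAX_REQUEST_SIZE + ")");
        }
        return size; // only now is it safe to allocate a buffer of this size
    }

    public static void main(String[] args) {
        // Four 'k' (0x6B) bytes from a non-Kafka client decode to
        // 0x6B6B6B6B = 1,802,201,963, i.e. ~1.8GB.
        ByteBuffer garbage = ByteBuffer.allocate(4).putInt(0x6B6B6B6B);
        garbage.flip();
        try {
            readRequestSize(garbage);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The bug report suggests the broker's equivalent of this check is bypassed on some code path, so the oversized allocation happens anyway.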
cc [~rnpridgeon], [~ijuma], [~gwenshap], [~cmccabe]
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)