Posted to jira@kafka.apache.org by "Jason Gustafson (Jira)" <ji...@apache.org> on 2021/02/20 18:45:00 UTC

[jira] [Created] (KAFKA-12351) Fix misleading max.request.size behavior

Jason Gustafson created KAFKA-12351:
---------------------------------------

             Summary: Fix misleading max.request.size behavior
                 Key: KAFKA-12351
                 URL: https://issues.apache.org/jira/browse/KAFKA-12351
             Project: Kafka
          Issue Type: Improvement
            Reporter: Jason Gustafson
            Assignee: Jason Gustafson


The producer has a configuration called `max.request.size`. It is documented as follows:
{code}
        "The maximum size of a request in bytes. This setting will limit the number of record " +
        "batches the producer will send in a single request to avoid sending huge requests. " +
        "This is also effectively a cap on the maximum uncompressed record batch size. Note that the server " +
        "has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.";
{code}
So the intent is to limit the overall size of the request, but the documentation says that it also serves as a cap on the maximum uncompressed batch size.

In the implementation, however, we use it as a cap on individual uncompressed record sizes, not on batches. Additionally, we treat it as a soft limit when applied to requests: a single batch may still push a request over the configured size. Both of these differences are worth pointing out in the documentation.
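For context, the producer-side check compares the serialized size of each individual record against `max.request.size`, not the size of a batch or request. A minimal sketch of that per-record check (class and method names here are simplified for illustration, not the actual Kafka source):
{code}
// Illustrative sketch of the producer's per-record size validation.
// The real logic lives inside KafkaProducer; names here are simplified.
public class RecordSizeCheck {
    private final int maxRequestSize;

    public RecordSizeCheck(int maxRequestSize) {
        this.maxRequestSize = maxRequestSize;
    }

    // Rejects a single serialized record that exceeds max.request.size.
    // Note: this bounds individual records, not whole batches or requests.
    public void ensureValidRecordSize(int serializedSize) {
        if (serializedSize > maxRequestSize) {
            throw new IllegalArgumentException(
                "The message is " + serializedSize +
                " bytes when serialized, which is larger than " + maxRequestSize +
                ", the value of the max.request.size configuration.");
        }
    }

    public static void main(String[] args) {
        RecordSizeCheck check = new RecordSizeCheck(1048576); // default 1 MiB
        check.ensureValidRecordSize(512 * 1024); // fine: under the cap
        boolean rejected = false;
        try {
            check.ensureValidRecordSize(2 * 1024 * 1024); // one oversized record
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected ? "oversized record rejected" : "unexpected");
    }
}
{code}
A batch of many small records can therefore pass this check even though the resulting request exceeds `max.request.size`, which is why the limit is effectively soft at the request level.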



--
This message was sent by Atlassian Jira
(v8.3.4#803005)