Posted to users@kafka.apache.org by doom <43...@qq.com> on 2020/10/21 08:20:21 UTC
When will a single batch be larger than max.request.size?
I thought it could only happen when the configured `batch.size` is larger than `max.request.size`. How does it relate to compression?
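For reference, a minimal sketch of the three settings in play (values are illustrative, not the defaults except where noted; `ProducerSizingConfig` is just a hypothetical holder class):

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerSizingConfig {
    static Properties illustrate() {
        Properties props = new Properties();
        // batch.size > max.request.size is the obvious way to hit the case;
        // the code comment in the drain loop below points at compression as the rare one.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 2 * 1024 * 1024);   // target batch: 2 MiB
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1024 * 1024); // request cap: 1 MiB (the default)
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");       // batch size is estimated before close
        return props;
    }
}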
`size + first.estimatedSizeInBytes() > maxSize`
Here `size` is a compressed size, while `first.estimatedSizeInBytes()` is not compressed, because `batch.close()` is only called after this line.
If `first` is the first batch of the request, then `size` is zero, so the condition reduces to `first.estimatedSizeInBytes() > max.request.size`.
And the earlier `producer.send()` has already checked that each record's uncompressed serialized size is below `max.request.size`.
RecordAccumulator.drainBatchesForOneNode():
https://github.com/apache/kafka/blob/962c624af9629d8e368f3dde8a9773d1f246dff7/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L590
private List<ProducerBatch> drainBatchesForOneNode(...) {
    // ...
    if (size + first.estimatedSizeInBytes() > maxSize && !ready.isEmpty()) {
        // there is a rare case that a single batch size is larger than the request size due to
        // compression; in this case we will still eventually send this batch in a single request
        break;
    }
    // ...
}
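To make the `size` / `estimatedSizeInBytes()` accounting concrete, here is a self-contained sketch of the same loop shape (`BatchStub` is a hypothetical stand-in for `ProducerBatch`, not the real class): because `ready.isEmpty()` defeats the break for the first batch, an oversized batch is still drained, alone, into its own request.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class DrainSketch {
    // Hypothetical stand-in for ProducerBatch; only the size matters here.
    static class BatchStub {
        final int estimatedSizeInBytes;
        BatchStub(int estimatedSizeInBytes) { this.estimatedSizeInBytes = estimatedSizeInBytes; }
    }

    // Mirrors the shape of the size check in drainBatchesForOneNode().
    static List<BatchStub> drain(Deque<BatchStub> queue, int maxSize) {
        List<BatchStub> ready = new ArrayList<>();
        int size = 0; // bytes already claimed by drained batches
        BatchStub first;
        while ((first = queue.peekFirst()) != null) {
            // An oversized *first* batch is not rejected: ready.isEmpty() defeats the break.
            if (size + first.estimatedSizeInBytes > maxSize && !ready.isEmpty())
                break;
            queue.pollFirst();
            size += first.estimatedSizeInBytes;
            ready.add(first);
        }
        return ready;
    }

    public static void main(String[] args) {
        int maxSize = 1024 * 1024; // pretend max.request.size = 1 MiB
        Deque<BatchStub> queue = new ArrayDeque<>(List.of(
            new BatchStub(2_000_000), // a single batch bigger than maxSize
            new BatchStub(500)));
        // The 2 MB batch is still drained, alone, into its own "request".
        System.out.println(drain(queue, maxSize).size()); // prints 1
    }
}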
producer.send() (via ensureValidRecordSize()):
https://github.com/apache/kafka/blob/962c624af9629d8e368f3dde8a9773d1f246dff7/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1056
private void ensureValidRecordSize(int size) {
    if (size > maxRequestSize)
        throw new RecordTooLargeException("The message is " + size +
            " bytes when serialized which is larger than " + maxRequestSize + ", which is the value of the " +
            ProducerConfig.MAX_REQUEST_SIZE_CONFIG + " configuration.");
}
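And a hedged sketch of that send()-side check in action (broker address and topic name are placeholder assumptions; it needs a reachable broker, since send() fetches metadata before the size check). With the Java client the RecordTooLargeException surfaces through the returned future, wrapped in an ExecutionException:

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class RecordTooLargeDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1024);              // tiny cap for the demo
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            byte[] value = new byte[4096]; // uncompressed serialized size > max.request.size
            try {
                producer.send(new ProducerRecord<>("demo-topic", value)).get(); // assumed topic
            } catch (ExecutionException e) {
                // ensureValidRecordSize() rejected the record before it reached the accumulator:
                // the cause is org.apache.kafka.common.errors.RecordTooLargeException
                System.out.println(e.getCause());
            }
        }
    }
}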