Posted to jira@kafka.apache.org by "Divij Vaidya (Jira)" <ji...@apache.org> on 2023/02/07 09:51:00 UTC

[jira] [Commented] (KAFKA-14631) Compression optimization: do not read the key/value for last record in the batch

    [ https://issues.apache.org/jira/browse/KAFKA-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17685186#comment-17685186 ] 

Divij Vaidya commented on KAFKA-14631:
--------------------------------------

Today, in the code, we validate that no uncompressed data is left unread by reading all the way to the end of the batch. If we find more uncompressed data than anticipated, we throw an exception. If we make this change, we will no longer be able to perform this validation.

Code ref: [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java#L426-L428] 
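For context, here is a rough sketch of the kind of end-of-batch check being described. This is not the actual Kafka code; the class name, method, length-prefixed record layout, and exception type are all simplified placeholders for illustration.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of the validation referred to above: after the declared
// number of records has been read from the decompressed stream, any leftover
// bytes indicate a malformed batch.
class BatchReadSketch {
    static void readBatch(InputStream decompressed, int numRecords) throws IOException {
        DataInputStream in = new DataInputStream(decompressed);
        for (int i = 0; i < numRecords; i++) {
            // Simplified stand-in for per-record parsing: a length prefix
            // followed by that many bytes of record body (key, value and
            // headers in the real format).
            int sizeOfBody = in.readInt();
            in.readFully(new byte[sizeOfBody]);
        }
        // The validation in question: because we read all the way to the end,
        // trailing uncompressed data can be detected here. Not reading the
        // last record's key/value would make this check impossible.
        if (in.read() != -1) {
            throw new IllegalStateException("Found trailing bytes after the last record in the batch");
        }
    }
}
{code}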

> Compression optimization: do not read the key/value for last record in the batch
> --------------------------------------------------------------------------------
>
>                 Key: KAFKA-14631
>                 URL: https://issues.apache.org/jira/browse/KAFKA-14631
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Divij Vaidya
>            Assignee: Divij Vaidya
>            Priority: Major
>             Fix For: 3.5.0
>
>
> Do not read the end of the batch, since it contains the key/value for the last record. Instead of “skipping”, which would still trigger decompression, we can simply not read it at all.
> Only applicable to the skipIterator.
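A minimal sketch of the idea in the description above, assuming a skip-style iterator over a decompressed stream with a simplified length-prefixed record layout (names are illustrative, not the actual DefaultRecordBatch code):

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative only: contrasts skipping (which still pulls the skipped bytes
// through the decompressor) with not reading the last record's body at all.
class SkipIteratorSketch {
    static void skipRecords(DataInputStream in, int numRecords) throws IOException {
        for (int i = 0; i < numRecords; i++) {
            int sizeOfBody = in.readInt(); // simplified length prefix
            if (i == numRecords - 1) {
                // Proposed optimization: leave the remaining bytes untouched so
                // the tail of the batch is never decompressed. This is exactly
                // what prevents the trailing-bytes validation mentioned in the
                // comment above.
                return;
            }
            // Earlier records still have to be advanced past, which decompresses
            // the skipped bytes under the hood.
            in.skipBytes(sizeOfBody);
        }
    }
}
{code}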



--
This message was sent by Atlassian Jira
(v8.20.10#820010)