Posted to dev@kafka.apache.org by "Guozhang Wang (JIRA)" <ji...@apache.org> on 2015/06/16 22:43:01 UTC

[jira] [Updated] (KAFKA-1895) Investigate moving deserialization and decompression out of KafkaConsumer

     [ https://issues.apache.org/jira/browse/KAFKA-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang updated KAFKA-1895:
---------------------------------
    Fix Version/s: 0.9.0

> Investigate moving deserialization and decompression out of KafkaConsumer
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-1895
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1895
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: consumer
>            Reporter: Jay Kreps
>             Fix For: 0.9.0
>
>
> The consumer implementation in KAFKA-1760 decompresses fetch responses and deserializes them into ConsumerRecords, which are then handed back as the result of poll().
> There are several downsides to this:
> 1. It is impossible to scale the deserialization and decompression work beyond the single thread running the KafkaConsumer.
> 2. The results can come back during the processing of other calls, such as commit(), which can force the consumer to cache these records longer than necessary.
> An alternative would be to have ConsumerRecords wrap the raw compressed, serialized MemoryRecords chunks and perform decompression and deserialization lazily during iteration. This way the work could be spread over a thread pool if needed, as in the sketch below.
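> A minimal sketch of that idea (the LazyRecords name, the Function-based deserializer, and the one-record-per-compressed-chunk simplification are illustrative assumptions, not actual Kafka classes; real Kafka chunks hold many records each):
>
>     import java.io.ByteArrayInputStream;
>     import java.io.IOException;
>     import java.io.UncheckedIOException;
>     import java.util.Iterator;
>     import java.util.List;
>     import java.util.function.Function;
>     import java.util.zip.GZIPInputStream;
>
>     // Hypothetical wrapper: holds the compressed, serialized chunks as
>     // fetched off the wire and defers all decompression/deserialization
>     // work until the iterator is advanced.
>     public class LazyRecords<T> implements Iterable<T> {
>         private final List<byte[]> compressedChunks;    // raw fetched bytes
>         private final Function<byte[], T> deserializer; // user-supplied decoder
>
>         public LazyRecords(List<byte[]> compressedChunks,
>                            Function<byte[], T> deserializer) {
>             this.compressedChunks = compressedChunks;
>             this.deserializer = deserializer;
>         }
>
>         @Override
>         public Iterator<T> iterator() {
>             Iterator<byte[]> chunks = compressedChunks.iterator();
>             return new Iterator<T>() {
>                 @Override public boolean hasNext() { return chunks.hasNext(); }
>                 @Override public T next() {
>                     // The expensive work happens here, on whichever thread
>                     // iterates, not on the thread that called poll().
>                     return deserializer.apply(decompress(chunks.next()));
>                 }
>             };
>         }
>
>         private static byte[] decompress(byte[] gzipped) {
>             try (GZIPInputStream in =
>                      new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
>                 return in.readAllBytes();
>             } catch (IOException e) {
>                 throw new UncheckedIOException(e);
>             }
>         }
>     }
>
> Because poll() would now return the wrapper without touching the payloads, a caller could hand iteration to a pool, e.g. executor.submit(() -> records.forEach(handler)), achieving the scaling described above.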



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)