Posted to dev@kafka.apache.org by "Jay Kreps (JIRA)" <ji...@apache.org> on 2015/01/23 20:08:34 UTC
[jira] [Created] (KAFKA-1895) Investigate moving deserialization and decompression out of KafkaConsumer
Jay Kreps created KAFKA-1895:
--------------------------------
Summary: Investigate moving deserialization and decompression out of KafkaConsumer
Key: KAFKA-1895
URL: https://issues.apache.org/jira/browse/KAFKA-1895
Project: Kafka
Issue Type: Sub-task
Reporter: Jay Kreps
The consumer implementation in KAFKA-1760 decompresses fetch responses and deserializes them into ConsumerRecords, which are then handed back as the result of poll().
There are several downsides to this:
1. It is impossible to scale deserialization and decompression work beyond the single thread running the KafkaConsumer.
2. Results can arrive during the processing of other calls such as commit(), which means these records may end up being cached longer than necessary.
An alternative would be to have ConsumerRecords wrap the actual compressed, serialized MemoryRecords chunks and perform the deserialization during iteration. That way the work could be scaled across a thread pool if needed.
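A minimal sketch of that idea (hypothetical types, not the actual Kafka API): records stay as raw serialized bytes until iteration, so the deserialization cost is paid by whichever thread iterates, which could be a worker in a thread pool rather than the thread calling poll().

```java
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for a ConsumerRecords that wraps raw serialized
// chunks (analogous to MemoryRecords) and deserializes lazily.
public class LazyRecords implements Iterable<String> {
    private final List<byte[]> rawChunks; // raw bytes as fetched, untouched by poll()

    public LazyRecords(List<byte[]> rawChunks) {
        this.rawChunks = rawChunks;
    }

    @Override
    public Iterator<String> iterator() {
        final Iterator<byte[]> raw = rawChunks.iterator();
        return new Iterator<String>() {
            @Override
            public boolean hasNext() {
                return raw.hasNext();
            }

            @Override
            public String next() {
                // Deserialization happens here, during iteration,
                // not inside the consumer's poll() call.
                return new String(raw.next(), StandardCharsets.UTF_8);
            }
        };
    }
}
```

Because each iterator owns only the chunks it is handed, distinct chunks could be iterated by different threads in a pool, spreading the decompression/deserialization cost beyond the single consumer thread.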
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)