Posted to dev@kafka.apache.org by "huxi (JIRA)" <ji...@apache.org> on 2017/02/15 09:18:42 UTC

[jira] [Comment Edited] (KAFKA-4762) Consumer throwing RecordTooLargeException even when messages are not that large

    [ https://issues.apache.org/jira/browse/KAFKA-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15864954#comment-15864954 ] 

huxi edited comment on KAFKA-4762 at 2/15/17 9:17 AM:
------------------------------------------------------

Logs show that you are using 0.10.x (or earlier), where max.partition.fetch.bytes is a hard limit even when compression is enabled. In your case it appears you have enabled compression on the producer side. `max.partition.fetch.bytes` is checked against the whole compressed message set, which is often much larger than any single message. That's why you run into RecordTooLargeException.
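The effect can be sketched in a few lines (Python purely for illustration; the 1,500-byte message size, batch count, and gzip codec are assumptions, not taken from the report): many messages that are individually far under the limit still produce one compressed wrapper well over a 512 KB max.partition.fetch.bytes.

```python
import gzip
import os

MAX_PARTITION_FETCH_BYTES = 524288  # the reporter's 512 KB consumer limit

# Hypothetical batch: 500 messages of ~1,500 bytes each -- every one far
# below the limit on its own (random bytes, so gzip cannot shrink them).
messages = [os.urandom(1500) for _ in range(500)]
assert all(len(m) < MAX_PARTITION_FETCH_BYTES for m in messages)

# A pre-0.10.1 broker delivers the whole compressed message set as one unit,
# and max.partition.fetch.bytes is enforced against that wrapper.
wrapper = gzip.compress(b"".join(messages))
print(len(wrapper), len(wrapper) > MAX_PARTITION_FETCH_BYTES)
```

Each message passes the per-message check, yet the ~750 KB wrapper cannot fit in the 512 KB fetch, which is exactly the symptom reported.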

0.10.1, which completes [KIP-74|https://cwiki.apache.org/confluence/display/KAFKA/KIP-74:+Add+Fetch+Response+Size+Limit+in+Bytes], already 'fixes' your problem by turning the `max.partition.fetch.bytes` field in the fetch request into a soft limit, so you can try with a 0.10.1 build.
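For reference, a minimal consumer configuration sketch for the 0.10.1+ behavior (the property names are real Kafka consumer settings; the specific values here are illustrative assumptions, not a recommendation):

```
# consumer.properties -- illustrative values only
bootstrap.servers=localhost:9092
group.id=example-group
# On 0.10.1+ this is a soft limit: a batch larger than this is still
# returned if it is the first batch in the partition, so the consumer
# no longer stalls with RecordTooLargeException on oversized batches.
max.partition.fetch.bytes=524288
# Added by KIP-74 in 0.10.1: caps the total fetch response size.
fetch.max.bytes=52428800
```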





> Consumer throwing RecordTooLargeException even when messages are not that large
> -------------------------------------------------------------------------------
>
>                 Key: KAFKA-4762
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4762
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.9.0.1
>            Reporter: Vipul Singh
>
> We were just recently hit by a weird error.
> Before going any further, an explanation of our service setup: we have a producer which produces messages no larger than 256 KB (we have an explicit check for this on the producer side), and on the consumer side we have a fetch limit of 512 KB (max.partition.fetch.bytes is set to 524288 bytes).
> Recently our client started to see this error:
> {quote}
> org.apache.kafka.common.errors.RecordTooLargeException: There are some messages at [Partition=Offset]: {topic_name-0=9925056036} whose size is larger than the fetch size 524288 and hence cannot be ever returned. Increase the fetch size, or decrease the maximum message size the broker will allow.
> {quote}
> We tried consuming messages with another consumer, without any max.partition.fetch.bytes limit, and it consumed fine. The messages were small and did not appear to be larger than 256 KB.
> We took a log dump, and the log size looked fine.
> {quote}
> offset: 9925056032 position: 191380053 isvalid: true payloadsize: 539 magic: 0 compresscodec: NoCompressionCodec crc: 1656420267 keysize: 8
> offset: 9925056033 position: 191380053 isvalid: true payloadsize: 1551 magic: 0 compresscodec: NoCompressionCodec crc: 2398479758 keysize: 8
> offset: 9925056034 position: 191380053 isvalid: true payloadsize: 1307 magic: 0 compresscodec: NoCompressionCodec crc: 2845554215 keysize: 8
> offset: 9925056035 position: 191380053 isvalid: true payloadsize: 1520 magic: 0 compresscodec: NoCompressionCodec crc: 3106984195 keysize: 8
> offset: 9925056036 position: 191713371 isvalid: true payloadsize: 1207 magic: 0 compresscodec: NoCompressionCodec crc: 3462154435 keysize: 8
> offset: 9925056037 position: 191713371 isvalid: true payloadsize: 418 magic: 0 compresscodec: NoCompressionCodec crc: 1536701802 keysize: 8
> offset: 9925056038 position: 191713371 isvalid: true payloadsize: 299 magic: 0 compresscodec: NoCompressionCodec crc: 4112567543 keysize: 8
> offset: 9925056039 position: 191713371 isvalid: true payloadsize: 1571 magic: 0 compresscodec: NoCompressionCodec crc: 3696994307 keysize: 8
> {quote}
> Has anyone seen something similar, or any pointers to troubleshoot this further?
> Please note: to overcome this issue, we deployed a new consumer without the max.partition.fetch.bytes limit, and it worked fine.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)