Posted to dev@kafka.apache.org by "Jun Rao (JIRA)" <ji...@apache.org> on 2013/10/08 17:23:43 UTC

[jira] [Commented] (KAFKA-1077) OutOfMemoryError when consuming large messages

    [ https://issues.apache.org/jira/browse/KAFKA-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789290#comment-13789290 ] 

Jun Rao commented on KAFKA-1077:
--------------------------------

Is this in 0.7 or 0.8? I assume the error is on the consumer side. It seems that you are fetching many topic/partitions whose total response size is 850M. You may need to increase the JVM heap size or use a smaller fetch size.
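
For the smaller-fetch-size option, a minimal sketch assuming the 0.8 high-level consumer (in 0.7 the equivalent consumer property is "fetch.size"); the ZooKeeper address, group id, and the 2MB figure below are placeholders, and the fetch size still has to be at least as large as your largest message:

    import java.util.Properties
    import kafka.consumer.{Consumer, ConsumerConfig}

    val props = new Properties()
    props.put("zookeeper.connect", "localhost:2181")   // placeholder
    props.put("group.id", "example-group")             // placeholder
    // Cap the bytes fetched per topic/partition per request; with many partitions
    // the memory needed for one fetch is roughly (number of partitions) * this value.
    // It still needs to be >= the largest message you expect (1M here).
    props.put("fetch.message.max.bytes", (2 * 1024 * 1024).toString)
    val connector = Consumer.create(new ConsumerConfig(props))
    // Alternatively (or in addition), raise the consumer JVM heap, e.g. -Xmx2g.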

> OutOfMemoryError when consuming large messages
> ----------------------------------------------
>
>                 Key: KAFKA-1077
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1077
>             Project: Kafka
>          Issue Type: Bug
>          Components: config, network
>            Reporter: Xiejing
>            Assignee: Jun Rao
>
> We set 'socket.request.max.bytes' to 100 * 1024 * 1024, but still see OutOfMemoryError when consuming messages (size 1M).
> e.g.  
> [08/10/13 05:44:47:047 AM EDT] 102 ERROR network.BoundedByteBufferReceive: OOME with size 858861616
> java.lang.OutOfMemoryError: Java heap space 
> 858861616 is much larger than 100 * 1024 * 1024, but no InvalidRequestException is thrown in BoundedByteBufferReceive.
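
To illustrate the kind of check being asked for (a simplified sketch, not the actual kafka.network.BoundedByteBufferReceive code; the class and method names are made up): validate the declared size against the configured maximum before allocating, so an oversized response fails fast instead of exhausting the heap.

    import java.nio.ByteBuffer

    // Sketch only: reject a declared payload size above the configured bound
    // instead of attempting the allocation and triggering an OutOfMemoryError.
    class BoundedReceiveSketch(maxSize: Int) {
      def allocateFor(declaredSize: Int): ByteBuffer = {
        if (declaredSize < 0 || declaredSize > maxSize)
          throw new IllegalArgumentException(
            "Declared size %d exceeds the allowed maximum %d".format(declaredSize, maxSize))
        ByteBuffer.allocate(declaredSize)  // reached only when the size check passes
      }
    }

With maxSize = 100 * 1024 * 1024, a declared size of 858861616 would then be rejected up front instead of producing the OOME above.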



--
This message was sent by Atlassian JIRA
(v6.1#6144)