Posted to dev@kafka.apache.org by "lokesh Birla (JIRA)" <ji...@apache.org> on 2014/12/04 23:20:13 UTC
[jira] [Commented] (KAFKA-1806) broker can still expose uncommitted data to a consumer
[ https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234713#comment-14234713 ]
lokesh Birla commented on KAFKA-1806:
-------------------------------------
I have a 3-node Kafka cluster, running one broker on each blade, and one ZooKeeper instance running on another blade.
I created 4 partitions with replication factor 3; the producer sends messages from one blade and the consumer reads from another. I see the above issue consistently.
However, the issue did not occur with the same configuration with up to 3 topics. I increased the heap size from 4GB to 16GB, but the issue remains.
> broker can still expose uncommitted data to a consumer
> ------------------------------------------------------
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
> Issue Type: Bug
> Components: consumer
> Affects Versions: 0.8.1.1
> Reporter: lokesh Birla
> Assignee: Neha Narkhede
>
> Although the following issue: https://issues.apache.org/jira/browse/KAFKA-727
> is marked as fixed, I still see this issue in 0.8.1.1. I am able to reproduce it consistently.
> [2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch request for partition [mmetopic4,2] offset 1940029 from consumer with correlation id 21 (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset (1818353) less than the start offset (1940029).
> at kafka.log.LogSegment.read(LogSegment.scala:136)
> at kafka.log.Log.read(Log.scala:386)
> at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.map(Map.scala:107)
> at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:745)
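> For reference, a minimal Java sketch of the guard that produces the error above (the class and method names here are hypothetical stand-ins, not Kafka's actual code): a read is rejected when the maximum readable offset the broker captured earlier, such as a stale high watermark, is below the start offset the consumer requests.

```java
public class UncommittedReadSketch {
    // Hypothetical mirror of the check in kafka.log.LogSegment.read:
    // if the captured maximum offset is below the requested start
    // offset, the broker throws rather than serve uncommitted data.
    static void read(long startOffset, long maxOffset) {
        if (maxOffset < startOffset) {
            throw new IllegalArgumentException(
                "Attempt to read with a maximum offset (" + maxOffset
                + ") less than the start offset (" + startOffset + ").");
        }
        // normal read path elided
    }

    public static void main(String[] args) {
        // Offsets taken from the log line above: the consumer asks for
        // offset 1940029, but the captured maximum offset is 1818353.
        try {
            read(1940029L, 1818353L);
            System.out.println("no error");
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException: " + e.getMessage());
        }
    }
}
```

> This matches the stack trace: the expired fetch request in the purgatory re-reads the log with a maximum offset captured before the consumer's position advanced, so the guard fires.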
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)