Posted to dev@kafka.apache.org by darion <da...@gmail.com> on 2014/04/09 09:59:36 UTC

offset is out of range

Hi experts,
A Storm topology is reading data from Kafka, and Kafka is just a single node.
The topology's spout is very slow, and I found this exception in Kafka's log:

[2014-04-09 14:58:53,729] ERROR error when processing request FetchRequest(topic:topic.nginx, part:0 offset:948810259194 maxSize:1048576) (kafka.server.KafkaRequestHandlers)
kafka.common.OffsetOutOfRangeException: offset 948810259194 is out of range
    at kafka.log.Log$.findRange(Log.scala:46)
    at kafka.log.Log.read(Log.scala:264)
    at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
    at kafka.server.KafkaRequestHandlers$$anonfun$2.apply(KafkaRequestHandlers.scala:101)
    at kafka.server.KafkaRequestHandlers$$anonfun$2.apply(KafkaRequestHandlers.scala:100)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
    at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
    at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
    at kafka.server.KafkaRequestHandlers.handleMultiFetchRequest(KafkaRequestHandlers.scala:100)
    at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:40)
    at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:40)
    at kafka.network.Processor.handle(SocketServer.scala:296)
    at kafka.network.Processor.read(SocketServer.scala:319)
    at kafka.network.Processor.run(SocketServer.scala:214)
    at java.lang.Thread.run(Thread.java:662)

Does this mean Kafka is holding too much data, or is it something else?
Is this a bug?

Thanks a lot

Re: offset is out of range

Posted by Jun Rao <ju...@gmail.com>.
It could mean that the requested offset no longer exists on the broker because
it's too old. If you look at the log segments on the broker, you can see the
valid offset range. The first valid offset is part of the file name of the
oldest log segment.
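
The file-name convention described above can be checked with a short script. This is a minimal sketch, assuming the usual Kafka layout where each segment file in a partition directory is named after its base offset, zero-padded, with a `.log` extension (the directory path you pass in would be under your broker's configured log directory):

```python
import os

def earliest_available_offset(partition_dir):
    """Return the smallest base offset among a partition's .log segments.

    Kafka names each log segment file after the first offset it contains
    (e.g. 00000000948000000000.log), so the minimum over the segment file
    names is the oldest offset still readable from this partition.
    """
    offsets = [
        int(name[: -len(".log")])
        for name in os.listdir(partition_dir)
        if name.endswith(".log")
    ]
    return min(offsets) if offsets else None
```

If the fetch offset is below this value, the broker cannot serve it and raises OffsetOutOfRangeException, which is consistent with a consumer whose stored offset has aged past the broker's retention.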

Thanks,

Jun


On Wed, Apr 9, 2014 at 12:59 AM, darion <da...@gmail.com> wrote: