Posted to dev@storm.apache.org by Guy Spielberg <gu...@alooma.io> on 2015/11/23 06:47:55 UTC

Re: Why am I getting OffsetOutOfRange: Updating offset from offset?

Hi Sachin,

I'm guessing those "few days of topology run" add up to about a week?
Your Kafka spout is probably lagging behind, and the messages at the
requested offset have already been deleted from the Kafka topic
(log.retention.hours), causing the subsequent fetch request to throw
`OffsetOutOfRange`.
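When that happens, the spout clamps its position back into the range the
broker still holds, which is what the "Updating offset from offset = X to
offset = Y" lines are. A simplified sketch of that recovery logic
(illustrative only, not the actual storm-kafka source):

```java
// Simplified sketch: recover from OffsetOutOfRange by clamping the requested
// offset into [earliest, latest], the range the broker still retains.
public class OffsetClamp {
    static long recoverOffset(long requested, long earliest, long latest) {
        if (requested < earliest) {
            // Everything below 'earliest' was deleted by retention: skip
            // ahead, silently dropping the messages in between.
            return earliest;
        }
        if (requested > latest) {
            return latest; // asked past the end of the log
        }
        return requested;
    }

    public static void main(String[] args) {
        // Mirrors the warning "Updating offset from offset = 7238824446 to
        // offset = 7241183683": the spout jumps forward to the earliest
        // offset still on the broker, losing the ~2.3M messages in between.
        System.out.println(OffsetClamp.recoverOffset(
                7238824446L, 7241183683L, 7300000000L));
    }
}
```

Note that every such jump is silent data loss. Depending on your storm-kafka
version, `KafkaConfig` also exposes `maxOffsetBehind` and
`useStartOffsetTimeIfOffsetOutOfRange`, which tune this behavior.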

Thanks,
Guy


On Mon, Nov 23, 2015 at 7:25 AM, Sachin Pasalkar <
Sachin_Pasalkar@symantec.com> wrote:

> Can someone help us on this?
>
> From: Sachin Pasalkar <sachin_pasalkar@symantec.com>
> Reply-To: dev@storm.apache.org
> Date: Friday, 20 November 2015 11:53 am
> To: dev@storm.apache.org
> Subject: Why am I getting OffsetOutOfRange: Updating offset from offset?
>
> Hi,
>
> We are developing an application in which, after the topology has been
> running for some days, we get continuous warning messages:
>
>
> 2015-11-20 05:05:42.226 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7238824446]
>
> 2015-11-20 05:05:42.229 s.k.t.TridentKafkaEmitter [WARN] OffsetOutOfRange:
> Updating offset from offset = 7238824446 to offset = 7241183683
>
> 2015-11-20 05:05:43.207 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7022945051]
>
> 2015-11-20 05:05:43.208 s.k.t.TridentKafkaEmitter [WARN] OffsetOutOfRange:
> Updating offset from offset = 7022945051 to offset = 7025309343
>
> 2015-11-20 05:05:44.260 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7170559432]
>
> 2015-11-20 05:05:44.264 s.k.t.TridentKafkaEmitter [WARN] OffsetOutOfRange:
> Updating offset from offset = 7170559432 to offset = 7172920769
>
> 2015-11-20 05:05:45.332 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7132495867]……
>
>
> At some point the topology stops processing messages, and I need to
> rebalance it to get it running again.
>
>
> My spout config is
>
>
> BrokerHosts brokers =
>     new ZkHosts((String) stormConfiguration.get(ZOOKEEPER_HOSTS));
> TridentKafkaConfig spoutConfig = new TridentKafkaConfig(
>     brokers, (String) stormConfiguration.get(KAFKA_INPUT_TOPIC));
>
> spoutConfig.scheme = getSpoutScheme(stormConfiguration);
> Boolean forceFromStart = (Boolean) stormConfiguration.get(FORCE_FROM_START);
>
> spoutConfig.ignoreZkOffsets = false;
> spoutConfig.fetchSizeBytes = stormConfiguration.getIntProperty(
>     KAFKA_CONSUMER_FETCH_SIZE_BYTE, KAFKA_CONSUMER_DEFAULT_FETCH_SIZE_BYTE);
> spoutConfig.bufferSizeBytes = stormConfiguration.getIntProperty(
>     KAFKA_CONSUMER_BUFFER_SIZE_BYTE, KAFKA_CONSUMER_DEFAULT_BUFFER_SIZE_BYTE);
>
> As far as I know, the only thing we are doing wrong is that the topic has
> 12 partitions but we read it with only 1 spout; that's a limitation on our
> side. I am not sure why it halts, though. It just keeps printing the lines
> below and does nothing:
>
>
> 2015-11-20 05:44:41.574 b.s.m.n.Server [INFO] Getting metrics for server
> on port 6700
>
> 2015-11-20 05:44:41.574 b.s.m.n.Client [INFO] Getting metrics for client
> connection to Netty-Client-b-bdata-xx.net/xxx.xx.xxx.xxx:6700
>
> 2015-11-20 05:44:41.574 b.s.m.n.Client [INFO] Getting metrics for client
> connection to Netty-Client-b-bdata-xx.net/xxx.xx.xxx.xxx:6709
>
> 2015-11-20 05:44:41.574 b.s.m.n.Client [INFO] Getting metrics for client
> connection to Netty-Client-b-bdata-xx.net/xxx.xx.xxx.xxx:6707
>
>
> Thanks,
>
> Sachin
>
>

Re: Why am I getting OffsetOutOfRange: Updating offset from offset?

Posted by Erik Weathers <ew...@groupon.com.INVALID>.
hi Sachin,

As Guy said, your topology is too slow.  The rate at which you process
tuples is slower than the rate at which they are deleted from the Kafka
brokers.

There can be innumerable reasons for this, but fundamentally you need to
increase the throughput of your topology's processing of the data it reads
from Kafka.  More workers, executors, etc. might help, but don't increase
those values thoughtlessly; do it with some planning and care.
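A back-of-the-envelope way to see the failure mode (hypothetical rates, just
to illustrate): if producers outpace the topology, the lag grows linearly,
and once the spout's offset is older than the retention window the fetches
start failing:

```java
// Toy lag model: lag grows at (produceRate - consumeRate) messages/hour,
// and the broker retains roughly produceRate * retentionHours messages.
// Once the lag exceeds that, the spout's offset has been deleted and
// fetches fail with OffsetOutOfRange.
public class LagModel {
    // Hours of runtime until the spout's offset falls off the log,
    // or -1 if the topology keeps up. Rates are in messages/hour.
    static double hoursUntilExpiry(double produceRate, double consumeRate,
                                   double retentionHours) {
        double lagGrowth = produceRate - consumeRate;
        if (lagGrowth <= 0) {
            return -1; // consumption keeps up; lag never exceeds retention
        }
        double retainedMessages = produceRate * retentionHours;
        return retainedMessages / lagGrowth;
    }

    public static void main(String[] args) {
        // Hypothetical: 1.2M msgs/h produced, 1.0M msgs/h consumed, 24h
        // retention -> 28.8M retained / 200k lag growth per hour = 144 hours
        // (~6 days), consistent with warnings starting after about a week.
        System.out.println(LagModel.hoursUntilExpiry(1_200_000, 1_000_000, 24));
    }
}
```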

- Erik


RE: Why am I getting OffsetOutOfRange: Updating offset from offset?

Posted by Sachin Pasalkar <Sa...@symantec.com>.
Yes, it's been running for a week now. I was also guessing the spout is lagging; I just wanted to double-check. Our retention is set to 24 hours. But I couldn't figure out why the topology halts after these kinds of warnings :(

