Posted to dev@storm.apache.org by "Xin Wang (JIRA)" <ji...@apache.org> on 2016/06/20 03:38:05 UTC

[jira] [Closed] (STORM-394) Messages has expired, OFFSET_OUT_OF_RANGE, new offset startOffsetTime, no new messages, again and again

     [ https://issues.apache.org/jira/browse/STORM-394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xin Wang closed STORM-394.
--------------------------
       Resolution: Fixed
         Assignee: Xin Wang
    Fix Version/s: 0.9.3

> Messages has expired, OFFSET_OUT_OF_RANGE, new offset startOffsetTime, no new messages, again and again
> -------------------------------------------------------------------------------------------------------
>
>                 Key: STORM-394
>                 URL: https://issues.apache.org/jira/browse/STORM-394
>             Project: Apache Storm
>          Issue Type: Bug
>          Components: storm-kafka
>    Affects Versions: 0.9.1-incubating
>            Reporter: Vladislav Pernin
>            Assignee: Xin Wang
>             Fix For: 0.9.3
>
>
> Issue created here (https://github.com/wurstmeister/storm-kafka-0.8-plus/issues/55) but closed since the module is maintained under the Storm umbrella now.
> I think there might be a case that is not covered:
> 0) messages in Kafka have expired
> 1) so the offsets stored in Zookeeper are no longer valid
> 2) an OFFSET_OUT_OF_RANGE error is thrown
> 3) getOffset is called with startOffsetTime
> 4) the fetch is retried with the returned startOffset
> 5) a ByteBufferMessageSet is returned, but it is empty
> KafkaUtils.fetchMessages seems to be called again and again with the old offset, and we get back to step 2 every time.
> I guess the new startOffset is not committed to Zookeeper since there are no new messages.
> This can happen in the case of a topology restart, which goes through TridentKafkaEmitter.reEmitPartitionBatch.
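
Below is a minimal, self-contained Java sketch of the loop described in the report. The classes and helpers here (Broker, FetchResult, OffsetOutOfRangeLoop) are illustrative stand-ins, not the actual storm-kafka or Kafka client API; the point is only to show why the partition never makes progress when the corrected offset is not committed back to Zookeeper.

import java.util.Collections;
import java.util.List;

// Illustrative simulation only; Broker and FetchResult are NOT the real
// storm-kafka / Kafka client classes.
public class OffsetOutOfRangeLoop {

    static class FetchResult {
        final boolean offsetOutOfRange;
        final List<String> messages;
        FetchResult(boolean offsetOutOfRange, List<String> messages) {
            this.offsetOutOfRange = offsetOutOfRange;
            this.messages = messages;
        }
    }

    // Broker whose oldest retained offset has moved past the consumer's
    // stored offset, and which has received no new messages since.
    static class Broker {
        final long earliestOffset = 500;  // everything before 500 has expired
        final long latestOffset = 500;    // ...and nothing new has arrived

        FetchResult fetch(long offset) {
            if (offset < earliestOffset || offset > latestOffset) {
                return new FetchResult(true, Collections.<String>emptyList());  // step 2
            }
            return new FetchResult(false, Collections.<String>emptyList());     // step 5
        }
    }

    public static void main(String[] args) {
        Broker broker = new Broker();
        long committedOffset = 120;  // stale offset read from Zookeeper (steps 0-1)

        // Each batch re-reads the committed offset, as a restarted topology
        // does via reEmitPartitionBatch.
        for (int batch = 0; batch < 3; batch++) {
            long offset = committedOffset;
            FetchResult result = broker.fetch(offset);

            if (result.offsetOutOfRange) {
                System.out.println("batch " + batch + ": OFFSET_OUT_OF_RANGE at " + offset);
                offset = broker.earliestOffset;   // step 3: getOffset(startOffsetTime)
                result = broker.fetch(offset);    // step 4: retry with the new offset
            }

            if (result.messages.isEmpty()) {
                // Step 5: nothing to emit, so nothing is committed back to
                // Zookeeper; the next batch starts from the stale offset again.
                System.out.println("batch " + batch + ": empty fetch, offset "
                        + offset + " never committed");
                continue;
            }

            committedOffset = offset + result.messages.size(); // only on non-empty fetch
        }
    }
}

The key point is the empty-fetch branch: because committing is tied to emitting messages, the corrected offset is lost and every subsequent batch repeats steps 2 through 5. As the resolution above indicates, the issue was fixed for the 0.9.3 release.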



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)