Posted to dev@storm.apache.org by "Rick Kellogg (JIRA)" <ji...@apache.org> on 2015/09/26 04:24:07 UTC

[jira] [Updated] (STORM-495) Add delayed retries to KafkaSpout

     [ https://issues.apache.org/jira/browse/STORM-495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rick Kellogg updated STORM-495:
-------------------------------
    Component/s: storm-kafka

> Add delayed retries to KafkaSpout
> ---------------------------------
>
>                 Key: STORM-495
>                 URL: https://issues.apache.org/jira/browse/STORM-495
>             Project: Apache Storm
>          Issue Type: Improvement
>          Components: storm-kafka
>    Affects Versions: 0.9.3
>         Environment: all environments
>            Reporter: Rick Kilgore
>            Assignee: Rick Kilgore
>            Priority: Minor
>              Labels: kafka, retry
>             Fix For: 0.10.0
>
>
> If a tuple in the topology originates from the KafkaSpout in the external/storm-kafka module, and a bolt in the topology indicates a failure by calling fail() on its OutputCollector, the KafkaSpout immediately retries the message.
> We wish to use this failure-and-retry behavior in our ingestion system whenever we hit a recoverable error from a downstream system, such as a 500 or 503 response from a service we depend on.  With the current KafkaSpout behavior, however, doing so results in a tight loop: the spout retries several times over a few seconds and then gives up.  We want to be able to delay each retry to give the downstream service time to recover, ideally with a configurable exponential backoff policy.
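
For illustration only (not part of the issue text), a minimal sketch of the failure path described above: a bolt that calls a downstream service and fails the tuple on a recoverable 5xx response so the KafkaSpout replays it. The class name DownstreamCallBolt and the callDownstreamService helper are hypothetical.

    import java.util.Map;

    import backtype.storm.task.OutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseRichBolt;
    import backtype.storm.tuple.Tuple;

    // Hypothetical bolt that fails tuples on recoverable downstream errors.
    public class DownstreamCallBolt extends BaseRichBolt {
        private OutputCollector collector;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple input) {
            int status = callDownstreamService(input.getString(0)); // hypothetical helper
            if (status == 500 || status == 503) {
                // Recoverable error: fail the tuple so the spout replays it.
                // Without a delayed-retry policy, that replay happens immediately.
                collector.fail(input);
            } else {
                collector.ack(input);
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // No output stream declared in this sketch.
        }

        private int callDownstreamService(String payload) {
            // Placeholder for the real HTTP call; returns the response status code.
            return 200;
        }
    }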
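Assuming the delayed-retry settings this issue introduced on SpoutConfig (retryInitialDelayMs, retryDelayMultiplier, retryDelayMaxMs), a configuration sketch might look like the following; the topic, zkRoot, and id values are hypothetical, and exact field names may differ by storm-kafka version.

    import backtype.storm.spout.SchemeAsMultiScheme;
    import storm.kafka.KafkaSpout;
    import storm.kafka.SpoutConfig;
    import storm.kafka.StringScheme;
    import storm.kafka.ZkHosts;

    public class KafkaSpoutRetryConfig {
        public static KafkaSpout buildSpout() {
            ZkHosts hosts = new ZkHosts("zookeeper.example.com:2181"); // hypothetical ZK connect string
            SpoutConfig config = new SpoutConfig(hosts, "ingest-topic", "/kafka-spout", "ingest-spout-id");
            config.scheme = new SchemeAsMultiScheme(new StringScheme());

            // Delayed retries as requested here: start with a short delay and
            // back off exponentially up to a cap, instead of replaying immediately.
            config.retryInitialDelayMs = 1000;   // first retry after 1 s
            config.retryDelayMultiplier = 2.0;   // double the delay on each failure
            config.retryDelayMaxMs = 60 * 1000;  // never wait more than 60 s

            return new KafkaSpout(config);
        }
    }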



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)