Posted to issues@storm.apache.org by "Jungtaek Lim (JIRA)" <ji...@apache.org> on 2017/02/03 02:55:52 UTC

[jira] [Resolved] (STORM-2014) New Kafka spout duplicates checking if failed messages have reached max retries

     [ https://issues.apache.org/jira/browse/STORM-2014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jungtaek Lim resolved STORM-2014.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 1.1.0
                   2.0.0

Thanks [~Srdo], I merged this into master and 1.x branch.

> New Kafka spout duplicates checking if failed messages have reached max retries
> -------------------------------------------------------------------------------
>
>                 Key: STORM-2014
>                 URL: https://issues.apache.org/jira/browse/STORM-2014
>             Project: Apache Storm
>          Issue Type: Improvement
>          Components: storm-kafka
>            Reporter: Stig Rohde Døssing
>            Assignee: Stig Rohde Døssing
>            Priority: Minor
>             Fix For: 2.0.0, 1.1.0
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The new Kafka spout has a RetryService interface that should make the logic around retrying tuples pluggable. The RetryServiceExponentialBackoff class has code for setting a max retry count and for dropping messages once they reach that limit. The spout duplicates this check in its fail method, which means the user must set different maxRetries values for the RetryService and the spout in order for the RetryService's drop logic to ever be hit.
> I think the retry logic belongs in the RetryService interface, and should be removed from the spout. It would also be good if the RetryService could indicate if a message will be retried or not.
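The consolidation the issue argues for can be sketched as follows. This is a minimal illustration with entirely hypothetical names (it does not match Storm's actual RetryService API): the retry service alone owns the max-retry count, and its return value tells the spout whether a failed message will be retried or permanently dropped.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical retry service sketch: the max-retry check lives here,
// so the spout's fail method does not need its own duplicate counter.
class SimpleRetryService {
    private final int maxRetries;
    private final Map<Long, Integer> failCounts = new HashMap<>();

    SimpleRetryService(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    // Records a failure for the given offset. Returns true if the message
    // will be scheduled for retry, false if it has exhausted its retries
    // and is dropped -- the "indicate if a message will be retried"
    // behavior the issue asks for. Backoff timing is elided.
    boolean schedule(long offset) {
        int count = failCounts.merge(offset, 1, Integer::sum);
        if (count > maxRetries) {
            failCounts.remove(offset); // give up on this message
            return false;
        }
        return true;
    }
}
```

With this shape, the spout's fail method simply consults the return value (retry vs. drop) instead of maintaining a second maxRetries of its own, so only one setting controls the retry limit.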



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)