Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/05/01 19:51:12 UTC

[jira] [Commented] (FLINK-3190) Retry rate limits for DataStream API

    [ https://issues.apache.org/jira/browse/FLINK-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265851#comment-15265851 ] 

ASF GitHub Bot commented on FLINK-3190:
---------------------------------------

GitHub user fijolekProjects opened a pull request:

    https://github.com/apache/flink/pull/1954

    [FLINK-3190] failure rate restart strategy

    Failure rate restart strategy - the job should only die if it fails too often in a given time frame.
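
    For illustration only (not part of this pull request's text), a minimal sketch of how the proposed
    strategy might be configured programmatically. The names RestartStrategies.failureRateRestart,
    Time, and the parameter order are assumptions based on this change and may differ from what is
    finally merged.

        // Sketch only: assumes the strategy is exposed through a
        // RestartStrategies.failureRateRestart(maxFailuresPerInterval, failureInterval, delay) factory.
        import java.util.concurrent.TimeUnit;

        import org.apache.flink.api.common.restartstrategy.RestartStrategies;
        import org.apache.flink.api.common.time.Time;
        import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

        public class FailureRateRestartExample {
            public static void main(String[] args) throws Exception {
                StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

                // Tolerate at most 3 failures within any 5-minute window,
                // waiting 10 seconds between consecutive restart attempts.
                env.setRestartStrategy(RestartStrategies.failureRateRestart(
                        3,                               // max failures per interval
                        Time.of(5, TimeUnit.MINUTES),    // length of the measuring interval
                        Time.of(10, TimeUnit.SECONDS))); // delay between restart attempts

                env.fromElements(1, 2, 3).print();
                env.execute("failure-rate restart example");
            }
        }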

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/fijolekProjects/flink FLINK-3190

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/1954.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1954
    
----
commit 7b409b3de71b15fd927cf0c46b11c1d342d5d03d
Author: Michal Fijolek <mi...@gmail.com>
Date:   2016-03-13T00:40:15Z

    [FLINK-3190] failure rate restart strategy

----


> Retry rate limits for DataStream API
> ------------------------------------
>
>                 Key: FLINK-3190
>                 URL: https://issues.apache.org/jira/browse/FLINK-3190
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Sebastian Klemke
>            Assignee: Michał Fijołek
>            Priority: Minor
>
> For a long-running stream processing job, absolute numbers of retries don't make much sense: the job accumulates transient errors over time and will eventually die once the threshold is exceeded. Rate limits are better suited in this scenario: a job should only die if it fails too often in a given time frame. To better overcome transient errors, retry delays could be used, as suggested in other issues.
> Absolute numbers of retries can still make sense if failing operators don't make any progress at all. We can measure progress by OperatorState changes and by observing output, as long as the operator in question is not a sink. If the operator state changes and/or the operator produces output, we can assume it makes progress.
> As an example, let's say we configured a retry rate limit of 10 retries per hour for a non-sink operator A. If the operator fails once every 10 minutes and produces output between failures, this should not lead to job termination. But if the operator fails 11 times within an hour, or does not produce output between 11 consecutive failures, the job should be terminated.
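
As an illustration of the scenario above (not part of the original issue text): if the failure-rate strategy also becomes configurable through flink-conf.yaml, the 10-failures-per-hour limit might look roughly like the sketch below. The key names are assumptions based on this proposal and may differ in the merged implementation.

    # Sketch only: key names are assumed, not confirmed by this issue.
    restart-strategy: failure-rate
    # Terminate the job once more than 10 failures occur within one hour.
    restart-strategy.failure-rate.max-failures-per-interval: 10
    restart-strategy.failure-rate.failure-rate-interval: 60 min
    # Optional pause between consecutive restart attempts.
    restart-strategy.failure-rate.delay: 10 s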



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)