Posted to dev@flink.apache.org by "Sebastian Klemke (JIRA)" <ji...@apache.org> on 2015/12/23 15:03:46 UTC

[jira] [Created] (FLINK-3190) Retry rate limits for DataStream API

Sebastian Klemke created FLINK-3190:
---------------------------------------

             Summary: Retry rate limits for DataStream API
                 Key: FLINK-3190
                 URL: https://issues.apache.org/jira/browse/FLINK-3190
             Project: Flink
          Issue Type: Improvement
            Reporter: Sebastian Klemke
            Priority: Minor


For a long-running stream processing job, absolute numbers of retries don't make much sense: the job will accumulate transient errors over time and eventually die once the threshold is exceeded. Rate limits are better suited to this scenario: a job should only die if it fails too often within a given time frame. To better overcome transient errors, retry delays could be used, as suggested in other issues.

Absolute numbers of retries can still make sense if a failing operator doesn't make any progress at all. We can measure progress by OperatorState changes and by observing output, as long as the operator in question is not a sink. If the operator's state changes and/or the operator produces output, we can assume it is making progress.

As an example, consider a non-sink operator A with a configured retry rate limit of 10 retries per hour. If the operator fails once every 10 minutes and produces output between failures, this should not lead to job termination. But if the operator fails 11 times within an hour, or does not produce output between 11 consecutive failures, the job should be terminated.
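The proposed rate limit amounts to a sliding-window counter over failure timestamps. A minimal sketch of that semantics (class and method names here are purely illustrative, not part of any Flink API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the proposed retry rate limit: allow a retry
// only if fewer than maxRetries failures occurred within the trailing
// window. Not an actual Flink class.
public class RetryRateLimiter {
    private final int maxRetries;          // e.g. 10 retries
    private final long windowMillis;       // e.g. one hour
    private final Deque<Long> failures = new ArrayDeque<>();

    public RetryRateLimiter(int maxRetries, long windowMillis) {
        this.maxRetries = maxRetries;
        this.windowMillis = windowMillis;
    }

    /**
     * Records a failure at the given timestamp. Returns true if a retry
     * is still within the rate limit, false if the job should terminate.
     */
    public boolean allowRetry(long nowMillis) {
        // Evict failures that have fallen out of the sliding window.
        while (!failures.isEmpty()
                && nowMillis - failures.peekFirst() >= windowMillis) {
            failures.pollFirst();
        }
        failures.addLast(nowMillis);
        return failures.size() <= maxRetries;
    }
}
```

With a limit of 10 per hour, a failure every 10 minutes is always allowed (at most 6 failures ever fall inside the window), while an 11th failure within a single hour trips the limit, matching the example above.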



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)