Posted to issues@flink.apache.org by "ZhuoYu Chen (Jira)" <ji...@apache.org> on 2021/10/18 15:31:00 UTC
[jira] [Commented] (FLINK-21884) Reduce TaskManager failure detection time
[ https://issues.apache.org/jira/browse/FLINK-21884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17430068#comment-17430068 ]
ZhuoYu Chen commented on FLINK-21884:
-------------------------------------
Hi [~rmetzger], I am very interested in this, and I would like to do some work on Flink. Could I help with this task?
Thank you
> Reduce TaskManager failure detection time
> -----------------------------------------
>
> Key: FLINK-21884
> URL: https://issues.apache.org/jira/browse/FLINK-21884
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Coordination
> Affects Versions: 1.14.0, 1.13.2
> Reporter: Robert Metzger
> Priority: Critical
> Labels: reactive
> Fix For: 1.15.0
>
> Attachments: image-2021-03-19-20-10-40-324.png
>
>
> In Flink 1.13 (and older versions), TaskManager failures stall processing for a significant amount of time, even though the system receives indications of the failure almost immediately through network connection losses.
> This is due to a high (default) heartbeat timeout of 50 seconds [1], chosen to accommodate GC pauses, transient network disruptions, or generally slow environments (otherwise, we would unregister a healthy TaskManager).
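> For reference, lowering the documented heartbeat options [1] is the only knob available today. A minimal sketch, using the Configuration API (the concrete values are illustrative, not recommendations):
> {code:java}
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.configuration.HeartbeatManagerOptions;
>
> public class HeartbeatTuning {
>     public static Configuration shorterFailureDetection() {
>         Configuration conf = new Configuration();
>         // Defaults: heartbeat.interval = 10000 ms, heartbeat.timeout = 50000 ms.
>         // Lowering both detects a dead TaskManager sooner, but a long GC pause
>         // or a transient network blip can now unregister a healthy TaskManager.
>         conf.set(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 2_500L);
>         conf.set(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 10_000L);
>         return conf;
>     }
> }
> {code}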
> Such a high timeout can lead to disruptions in processing (no processing for certain periods, high latencies, build-up of consumer lag, etc.). In Reactive Mode (FLINK-10407), the issue surfaces on scale-down events, where the loss of a TaskManager is immediately visible in the logs, but the job is stuck in "FAILING" for quite a while until the TaskManager is actually deregistered. (Note that this issue is less critical in an autoscaling setup, because Flink can control the scale-down events and trigger them proactively.)
> On the attached metrics dashboard, one can see that the job suffers significant throughput drops / consumer lag during scale-down (and also CPU usage spikes while processing the queued events, leading to incorrect scale-up events again).
> !image-2021-03-19-20-10-40-324.png|thumbnail!
> One idea to solve this problem is to:
> - Score TaskManagers based on certain signals (number of exceptions reported, exception types (connection losses, Akka failures), failure frequencies, ...) and blacklist them accordingly.
> - Introduce a best-effort TaskManager unregistration mechanism: when a TaskManager receives a SIGTERM, it sends a final "goodbye" message to the JobManager, and the JobManager can immediately remove the TM from its bookkeeping (see the sketch after this list).
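> A rough sketch of the "goodbye" mechanism, assuming a hypothetical JobManager-side RPC (all names below are illustrative, not an existing Flink API; the real change would go through the ResourceManager/JobMaster gateways):
> {code:java}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.TimeUnit;
>
> // Hypothetical stand-in for the JobManager-side RPC.
> interface JobManagerGateway {
>     CompletableFuture<Void> taskManagerGoodbye(String taskManagerId, String reason);
> }
>
> public class GoodbyeOnSigterm {
>     public static void install(JobManagerGateway jobManager, String taskManagerId) {
>         // SIGTERM triggers JVM shutdown hooks, so a best-effort farewell can be
>         // sent from here. It must stay best-effort: SIGKILL or node loss never
>         // runs this path, so the heartbeat timeout remains the fallback.
>         Runtime.getRuntime().addShutdownHook(new Thread(() -> {
>             try {
>                 jobManager.taskManagerGoodbye(taskManagerId, "received SIGTERM")
>                           .get(2, TimeUnit.SECONDS);
>             } catch (Exception e) {
>                 // Swallow: shutdown must not block on an unreachable JobManager.
>             }
>         }, "goodbye-on-sigterm"));
>     }
> }
> {code}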
> [1] https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#heartbeat-timeout
--
This message was sent by Atlassian Jira
(v8.3.4#803005)