Posted to dev@flink.apache.org by "Zhinan Cheng (Jira)" <ji...@apache.org> on 2020/08/21 02:09:00 UTC

[jira] [Created] (FLINK-19009) wrong way to calculate the "downtime" metric

Zhinan Cheng created FLINK-19009:
------------------------------------

             Summary: wrong way to calculate the "downtime" metric
                 Key: FLINK-19009
                 URL: https://issues.apache.org/jira/browse/FLINK-19009
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Metrics
    Affects Versions: 1.11.1
         Environment: I found the problem in version 1.7.2, and after checking the latest source code and documentation for 1.11, the problem still exists.
            Reporter: Zhinan Cheng


Currently the calculation of the Flink system metric "downtime" is not consistent with its description in the documentation: the metric is actually computed as the current timestamp minus the timestamp when the job started.
   
The Flink documentation (https://flink.apache.org/gettinghelp.html), however, clearly describes the metric as the current timestamp minus the timestamp when the job failed.
 
I believe we should update the code so that this metric matches the documentation. A straightforward fix is to compute the downtime as the current timestamp minus the timestamp at which the last uptime period ended.
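To make the difference concrete, here is a minimal sketch contrasting the two calculations. The method and variable names (jobStartTimestamp, lastFailureTimestamp, etc.) are illustrative only and are not the actual Flink internals:

```java
public class DowntimeSketch {

    // Behavior described above as the current (incorrect) one:
    // downtime measured from the moment the job started.
    static long downtimeSinceJobStart(long now, long jobStartTimestamp) {
        return now - jobStartTimestamp;
    }

    // Behavior the documentation describes: downtime measured from
    // the moment the job failed, i.e. the end of the last uptime period.
    static long downtimeSinceFailure(long now, long lastFailureTimestamp) {
        return now - lastFailureTimestamp;
    }

    public static void main(String[] args) {
        long jobStart = 0L;       // job started at t = 0 ms
        long failedAt = 60_000L;  // job failed one minute in
        long now = 90_000L;       // 30 seconds after the failure

        // The two formulas diverge as soon as the job has any uptime:
        System.out.println(downtimeSinceJobStart(now, jobStart)); // 90000
        System.out.println(downtimeSinceFailure(now, failedAt));  // 30000
    }
}
```

With one minute of uptime before the failure, the first formula over-reports the downtime by exactly that minute, which is the inconsistency this issue describes.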



--
This message was sent by Atlassian Jira
(v8.3.4#803005)