Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/03/11 11:55:14 UTC

[GitHub] [flink] zentol commented on pull request #19047: [FLINK-26583][runtime] Make the Flink cluster fail if the job ID is listed as globally-terminated and cleaned

zentol commented on pull request #19047:
URL: https://github.com/apache/flink/pull/19047#issuecomment-1065045198


   > The rationale behind this change is to make the user aware of the fact that there's still a JobResultEntry lying around. Flink itself is behaving as expected. Alternatively, we could add a warning. I was just afraid that the user might not notice it and, therefore, went for the exception approach instead.
   
   In practice, if you use Kubernetes for example, won't this result in the job being re-submitted again and again, because the cluster fails until some failure-rate policy is triggered? I'm not sure this is the better alternative.
   In particular, this can happen without the user doing anything wrong; say the JM crashes after having cleaned up the job result. In that situation we very much want Flink to just shut down.
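   To make the trade-off between the quoted rationale and the concern above concrete, here is a minimal, self-contained Java sketch of the two alternatives. All names in it (cleanedJobResults, submitJobFailing, submitJobWarning) are hypothetical stand-ins for illustration, not the actual Flink runtime classes or the code in this PR.

       import java.util.Set;
       import java.util.concurrent.ConcurrentHashMap;
       import java.util.logging.Logger;

       // Hypothetical sketch, not Flink code: illustrates "fail the cluster"
       // vs. "log a warning" when a submitted job ID is already recorded as
       // globally terminated and cleaned.
       public class DuplicateSubmissionSketch {

           // Stand-in for the store that remembers cleaned job results.
           static final Set<String> cleanedJobResults = ConcurrentHashMap.newKeySet();

           static final Logger LOG = Logger.getLogger("DuplicateSubmissionSketch");

           // Alternative 1 (this PR's approach): throw, so the user cannot
           // miss it. Under an external supervisor that restarts the process
           // on failure, this can trigger a restart -> re-submission ->
           // failure loop.
           static void submitJobFailing(String jobId) {
               if (cleanedJobResults.contains(jobId)) {
                   throw new IllegalStateException(
                           "Job " + jobId + " is already globally terminated and cleaned.");
               }
               LOG.info("Submitting job " + jobId);
           }

           // Alternative 2: warn and treat the submission as a duplicate of
           // an already-finished job; the cluster keeps running or can shut
           // down gracefully.
           static void submitJobWarning(String jobId) {
               if (cleanedJobResults.contains(jobId)) {
                   LOG.warning("Ignoring duplicate submission of cleaned job " + jobId);
                   return;
               }
               LOG.info("Submitting job " + jobId);
           }

           public static void main(String[] args) {
               cleanedJobResults.add("job-1");
               submitJobWarning("job-1"); // logs a warning, returns normally
               submitJobFailing("job-1"); // throws; a cluster would fail here
           }
       }

   With alternative 1, a supervisor that re-submits the job after every process failure would hit the same exception each time, which is exactly the loop described above.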

