Posted to issues@flink.apache.org by "Yang Wang (Jira)" <ji...@apache.org> on 2020/05/08 08:03:00 UTC

[jira] [Commented] (FLINK-17566) Fix potential K8s resources leak after JobManager finishes in Application mode

    [ https://issues.apache.org/jira/browse/FLINK-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102352#comment-17102352 ] 

Yang Wang commented on FLINK-17566:
-----------------------------------

https://github.com/fabric8io/kubernetes-client/issues/2209

I have created a ticket in the fabric8 kubernetes-client project, and the conclusion there is that deleting a Deployment is just one API request to the Kubernetes APIServer. There may be some other reason for the residual resources.
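
For reference, a minimal sketch of such a deletion with the fabric8 client (the namespace and Deployment name below are placeholders, not taken from this issue); the single fluent call maps to one DELETE request to the APIServer:

{code:java}
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class DeleteDeploymentSketch {
    public static void main(String[] args) {
        // try-with-resources closes the client and its HTTP connections.
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // One fluent call; the client issues a single DELETE request
            // against the Deployment resource.
            client.apps().deployments()
                  .inNamespace("default")           // placeholder namespace
                  .withName("my-flink-cluster")     // placeholder Deployment name
                  .delete();
        }
    }
}
{code}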

> Fix potential K8s resources leak after JobManager finishes in Application mode
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-17566
>                 URL: https://issues.apache.org/jira/browse/FLINK-17566
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>            Reporter: Canbin Zheng
>            Priority: Major
>
> FLINK-10934 introduces application mode support in the native K8s setup, but as discussed in https://github.com/apache/flink/pull/12003, there is a large probability that all the K8s resources leak after the JobManager finishes, except that the replica count of the Deployment is scaled down to 0. We need to find out the root cause and fix it.
> This may be related to the way the fabric8 SDK deletes a Deployment. It splits the procedure into the following three steps (a rough sketch of these steps follows the quoted description):
>  # Scales down the replica to 0
>  # Wait until the scaling down succeeds
>  # Delete the ReplicaSet
>  
>  
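
For illustration only, a rough sketch of that three-step sequence written against the fabric8 client API. The namespace, names, and label selector are assumptions for the example, not taken from the issue, and note that the fabric8 ticket linked above concluded the client actually sends a single delete request; this sketch just visualizes the steps quoted in the description.

{code:java}
import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ThreeStepDeleteSketch {
    public static void main(String[] args) throws InterruptedException {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            String ns = "default";              // assumed namespace
            String name = "my-flink-cluster";   // assumed Deployment name

            // 1. Scale the Deployment's replicas down to 0.
            client.apps().deployments().inNamespace(ns).withName(name).scale(0);

            // 2. Poll until the Deployment reports no running replicas.
            while (true) {
                Deployment d = client.apps().deployments()
                        .inNamespace(ns).withName(name).get();
                Integer replicas = (d == null || d.getStatus() == null)
                        ? null : d.getStatus().getReplicas();
                if (replicas == null || replicas == 0) {
                    break;
                }
                TimeUnit.SECONDS.sleep(1);
            }

            // 3. Delete the ReplicaSets owned by the Deployment (matched here
            //    by an assumed label), then the Deployment itself.
            client.apps().replicaSets().inNamespace(ns)
                  .withLabel("app", name)       // assumed label selector
                  .delete();
            client.apps().deployments().inNamespace(ns).withName(name).delete();
        }
    }
}
{code}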



--
This message was sent by Atlassian Jira
(v8.3.4#803005)