Posted to user@spark.apache.org by purna pradeep <pu...@gmail.com> on 2018/05/23 15:33:56 UTC

Spark driver pod garbage collection

Hello,

Currently I observe that dead pods (i.e. Spark driver pods that have completed
execution) are not getting garbage collected, so pods can potentially sit in
the namespace for weeks. This makes listing, parsing, and reading pods slower,
as well as leaving junk on the cluster.

I believe the minimum-container-ttl-duration kubelet flag defaults to
0 minutes, but I don't see the completed Spark driver pods being garbage
collected.

Do I need to set any flag explicitly at the kubelet level?
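
For reference, completed driver pods can also be pruned manually as a stopgap.
A minimal sketch with the Kubernetes Python client, assuming the kubernetes
package is installed and that driver pods can be identified by a label such as
spark-role=driver (the label Spark on Kubernetes sets on driver pods), with
"spark" as a hypothetical namespace:

    # cleanup_drivers.py -- delete completed Spark driver pods in one namespace.
    # Assumes the official `kubernetes` Python client and a spark-role=driver
    # label on driver pods; adjust namespace and selector for your cluster.
    from kubernetes import client, config

    config.load_kube_config()          # or config.load_incluster_config()
    v1 = client.CoreV1Api()

    namespace = "spark"                # hypothetical namespace
    pods = v1.list_namespaced_pod(
        namespace,
        label_selector="spark-role=driver",
        field_selector="status.phase=Succeeded",
    )
    for pod in pods.items:
        print("deleting", pod.metadata.name)
        v1.delete_namespaced_pod(pod.metadata.name, namespace)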

Re: Spark driver pod garbage collection

Posted by Anirudh Ramanathan <ra...@google.com.INVALID>.
There's a flag on the controller manager that governs the retention policy for
terminated/completed pods.

https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#options
--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage
collector starts deleting terminated pods. If <= 0, the terminated pod
garbage collector is disabled.
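
So with the default of 12500, nothing is deleted until that many terminated
pods exist in the cluster, which is why completed driver pods can linger. A
rough way to check how close you are to the threshold (a sketch assuming the
Kubernetes Python client and permission to list pods in all namespaces):

    # count_terminated_pods.py -- see how close the cluster is to the GC threshold.
    # Assumes the official `kubernetes` Python client and cluster-wide list access.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    terminated = [
        p for p in v1.list_pod_for_all_namespaces().items
        if p.status.phase in ("Succeeded", "Failed")
    ]
    print(len(terminated), "terminated pods",
          "(GC only starts deleting once this exceeds --terminated-pod-gc-threshold)")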
