Posted to user@spark.apache.org by purna pradeep <pu...@gmail.com> on 2018/05/22 14:55:18 UTC

Spark driver pod eviction Kubernetes

Hi,

What would be the recommended approach to wait for the Spark driver pod to
complete the currently running job before it gets evicted and rescheduled
onto another node while the current node undergoes maintenance (kernel
upgrade, hardware maintenance, etc.) via the drain command?
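
For reference, this is the kind of drain invocation I mean (the node name is
a placeholder; exact flags depend on the cluster setup):

    # Mark the node unschedulable and evict its pods ahead of maintenance
    kubectl drain <node-name> --ignore-daemonsets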

I don’t think I can use a PodDisruptionBudget, as the Spark pods’ deployment
YAML(s) are managed by Kubernetes.

Please suggest!

Re: Spark driver pod eviction Kubernetes

Posted by Anirudh Ramanathan <ra...@google.com.INVALID>.
I think a PodDisruptionBudget might actually work here. It can select the
Spark driver pod using a label, and pairing that with an appropriate
minAvailable value could do it.
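
A minimal sketch of such a budget (assuming the driver pod carries the
spark-role: driver label that Spark on Kubernetes applies; adjust the
selector to match the labels on your driver pods):

    apiVersion: policy/v1beta1  # policy/v1 on Kubernetes 1.21+
    kind: PodDisruptionBudget
    metadata:
      name: spark-driver-pdb
    spec:
      # With a single matching driver pod, minAvailable: 1 leaves zero
      # voluntary disruptions allowed, so kubectl drain will wait until
      # the pod terminates on its own (i.e. the job completes).
      minAvailable: 1
      selector:
        matchLabels:
          spark-role: driver

Since kubectl drain evicts pods through the eviction API, which honors
PodDisruptionBudgets, the drain would block until the driver finishes.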

More generally, we do plan some future work to support driver recovery,
which should let long-running jobs restart without losing progress.

On Tue, May 22, 2018, 7:55 AM purna pradeep <pu...@gmail.com> wrote:

> Hi,
>
> What would be the recommended approach to wait for the Spark driver pod to
> complete the currently running job before it gets evicted and rescheduled
> onto another node while the current node undergoes maintenance (kernel
> upgrade, hardware maintenance, etc.) via the drain command?
>
> I don’t think I can use a PodDisruptionBudget, as the Spark pods’
> deployment YAML(s) are managed by Kubernetes.
>
> Please suggest!
>