Posted to issues@spark.apache.org by "Marcelo Masiero Vanzin (Jira)" <ji...@apache.org> on 2019/11/18 18:00:12 UTC

[jira] [Created] (SPARK-29950) Deleted excess executors can connect back to driver in K8S with dyn alloc on

Marcelo Masiero Vanzin created SPARK-29950:
----------------------------------------------

             Summary: Deleted excess executors can connect back to driver in K8S with dyn alloc on
                 Key: SPARK-29950
                 URL: https://issues.apache.org/jira/browse/SPARK-29950
             Project: Spark
          Issue Type: Improvement
          Components: Kubernetes
    Affects Versions: 3.0.0
            Reporter: Marcelo Masiero Vanzin


{{ExecutorPodsAllocator}} currently has code to delete excess pods that the K8S server hasn't started yet and that aren't needed anymore due to downscaling.

The problem is that there is a race between K8S starting the pod and the Spark code deleting it. This may cause the pod to connect back to Spark and do a lot of initialization, sometimes even being considered for task allocation, just to be killed almost immediately.

This doesn't cause any problems that I could detect in my tests, but it wastes resources and causes the logs to contain misleading messages about the executor being killed. It would be nice to avoid that.
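One way to close the race, sketched below in simplified form, is for the allocator to remember which pending executor IDs it has already asked the API server to delete, so that a late registration from one of those pods can be refused before the driver does any real work for it. This is only an illustrative sketch, not the actual ExecutorPodsAllocator code; the class and method names (PendingPodTracker, shouldAcceptRegistration, etc.) are hypothetical.

{code:scala}
import scala.collection.mutable

// Hypothetical, simplified model of the idea: track pods that were requested but not
// yet seen as RUNNING, and remember the ones we deleted during downscaling so a late
// "executor registered" event from such a pod can be ignored.
class PendingPodTracker {
  // Executor IDs requested from K8S but not yet observed as running.
  private val pendingExecutors = mutable.Set[Long]()
  // Executor IDs whose (still-pending) pods we have already asked the API server to delete.
  private val deletedPendingExecutors = mutable.Set[Long]()

  def podRequested(execId: Long): Unit = synchronized {
    pendingExecutors += execId
  }

  def podRunning(execId: Long): Unit = synchronized {
    pendingExecutors -= execId
  }

  // Called when dynamic allocation lowers the target executor count.
  def downscale(excess: Int, deletePod: Long => Unit): Unit = synchronized {
    val toDelete = pendingExecutors.take(excess).toSeq
    toDelete.foreach { id =>
      deletePod(id)                   // issue the K8S delete request
      deletedPendingExecutors += id   // remember it so a late connect can be refused
      pendingExecutors -= id
    }
  }

  // Called when an executor tries to register with the driver. Returning false means
  // "refuse registration": the pod was already marked for deletion and will die soon.
  def shouldAcceptRegistration(execId: Long): Boolean = synchronized {
    !deletedPendingExecutors.contains(execId)
  }
}
{code}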



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org