Posted to issues@spark.apache.org by "Weiwei Yang (Jira)" <ji...@apache.org> on 2021/10/18 23:09:00 UTC

[jira] [Created] (SPARK-37049) executorIdleTimeout is not working for pending pods on K8s

Weiwei Yang created SPARK-37049:
-----------------------------------

             Summary: executorIdleTimeout is not working for pending pods on K8s
                 Key: SPARK-37049
                 URL: https://issues.apache.org/jira/browse/SPARK-37049
             Project: Spark
          Issue Type: Bug
          Components: Kubernetes, Spark Core
    Affects Versions: 3.1.0
            Reporter: Weiwei Yang


SPARK-33099 added support for respecting "spark.dynamicAllocation.executorIdleTimeout" in ExecutorPodsAllocator. However, when it checks whether a pending executor pod has timed out, it checks against the pod's "startTime". For a pending pod, "startTime" is empty, which causes the function "isExecutorIdleTimedOut()" to always return true for pending pods.
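
For reference, this is the standard dynamic-allocation idle timeout; an illustrative (values are examples only) way it would typically be enabled for a K8s application is:

    --conf spark.dynamicAllocation.enabled=true
    --conf spark.dynamicAllocation.shuffleTracking.enabled=true
    --conf spark.dynamicAllocation.executorIdleTimeout=60s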

As a result, pending pods are deleted immediately when a stage finishes, and several new pods have to be recreated in the next stage.
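
To illustrate the failure mode, here is a minimal, simplified sketch in Scala (not the actual Spark source; the case class and helper names are hypothetical) of how an empty startTime makes the idle-timeout check trivially true:

    import java.time.{Duration, Instant}

    // Simplified stand-in for the pod state tracked by ExecutorPodsAllocator.
    // For a pod that is still Pending, the start timestamp from the pod status
    // can be unset, modeled here as None.
    case class TrackedExecutorPod(startTime: Option[Instant])

    // Simplified stand-in for isExecutorIdleTimedOut(): if startTime is missing
    // and falls back to the epoch, every pending pod appears to have been idle
    // far longer than the configured executorIdleTimeout.
    def isExecutorIdleTimedOut(pod: TrackedExecutorPod,
                               now: Instant,
                               idleTimeoutMs: Long): Boolean = {
      val start = pod.startTime.getOrElse(Instant.EPOCH) // empty for pending pods
      Duration.between(start, now).toMillis > idleTimeoutMs
    }

    // A pending pod with no startTime is always reported as timed out:
    // isExecutorIdleTimedOut(TrackedExecutorPod(None), Instant.now(), 60000L) == true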


