Posted to issues@spark.apache.org by "Lenin (JIRA)" <ji...@apache.org> on 2018/04/26 17:50:00 UTC

[jira] [Created] (SPARK-24105) Spark 2.3.0 on kubernetes

Lenin created SPARK-24105:
-----------------------------

             Summary: Spark 2.3.0 on kubernetes
                 Key: SPARK-24105
                 URL: https://issues.apache.org/jira/browse/SPARK-24105
             Project: Spark
          Issue Type: Improvement
          Components: Kubernetes
    Affects Versions: 2.3.0
            Reporter: Lenin


Right now it is only possible to define a node selector through spark.kubernetes.node.selector.[labelKey], and that selector is applied to both driver and executor pods. Without the ability to isolate driver pods from executor pods, the cluster can run into a deadlock scenario: a large number of concurrent spark-submits can fill the cluster's capacity with driver pods, leaving no room for executor pods to do any work.
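For illustration, this is the shape of the existing configuration (the master URL and label key/value below are placeholders, not values from this issue):

```
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.node.selector.nodeType=spark \
  ...
```

With this single selector, both the driver pod and every executor pod are scheduled only onto nodes labeled nodeType=spark, so drivers and executors compete for the same pool of nodes.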

 

To avoid this deadlock, node selector configuration (and, in the future, affinity/anti-affinity configuration) needs to be supported separately for the driver and the executors.
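One possible shape for such a feature, sketched with hypothetical property names (these do not exist in Spark 2.3.0; they simply mirror the existing spark.kubernetes.node.selector.* prefix per role), would be:

```
# Hypothetical per-role selectors -- a proposal sketch, not an existing API in 2.3.0
spark.kubernetes.driver.node.selector.nodeType=spark-driver
spark.kubernetes.executor.node.selector.nodeType=spark-executor
```

Driver pods could then be confined to a small dedicated node pool while executors scale across the rest of the cluster, so a burst of submissions can no longer starve executors of capacity.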

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org