Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2022/05/20 03:47:00 UTC

[jira] [Closed] (SPARK-39023) Add Executor Pod inter-pod anti-affinity

     [ https://issues.apache.org/jira/browse/SPARK-39023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun closed SPARK-39023.
---------------------------------

> Add Executor Pod inter-pod anti-affinity
> ----------------------------------------
>
>                 Key: SPARK-39023
>                 URL: https://issues.apache.org/jira/browse/SPARK-39023
>             Project: Spark
>          Issue Type: New Feature
>          Components: Kubernetes
>    Affects Versions: 3.2.1
>            Reporter: binjie yang
>            Priority: Major
>
> h3. Why do we need this?
> When Spark on Kubernetes is running, executor pods can cluster onto a few nodes under certain conditions (uneven resource allocation in Kubernetes: high load on some nodes, low load on others), causing shuffle data skew. This can make the Spark application fail or hit performance bottlenecks, such as shuffle fetch timeouts and connection refusals once the connection count grows too high.
> h3. Why should we use this?
> The functionality mentioned in this PR was tested on a cluster of three Kubernetes nodes (node-1, node-2, node-3).
> When resources are uniformly sufficient or insufficient across the cluster, the behavior is the same whether the feature is enabled or not: Kubernetes assigns pods to low-load nodes based on global resources. However, when only one node has a small load, Kubernetes will schedule all executor pods onto that single node.
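As context for the request above (an editorial sketch, not part of the original issue): a similar spreading effect can already be approximated today with the existing `spark.kubernetes.executor.podTemplateFile` option, which lets a user supply a pod template carrying a standard Kubernetes `podAntiAffinity` stanza. The file name below is hypothetical; `spark-role: executor` is the label Spark applies to executor pods.

```yaml
# executor-pod-template.yaml (hypothetical file name), passed via:
#   --conf spark.kubernetes.executor.podTemplateFile=executor-pod-template.yaml
apiVersion: v1
kind: Pod
spec:
  affinity:
    podAntiAffinity:
      # "preferred" spreads executors across nodes when possible, but still
      # allows co-location if the scheduler cannot satisfy the constraint.
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                spark-role: executor   # label Spark sets on executor pods
            topologyKey: kubernetes.io/hostname
```

Using `requiredDuringSchedulingIgnoredDuringExecution` instead would enforce strict one-executor-per-node placement, at the cost of leaving executors pending when there are fewer nodes than executors.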



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org