Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2020/03/17 09:21:00 UTC

[jira] [Resolved] (SPARK-28022) k8s pod affinity to achieve cloud native friendly autoscaling

     [ https://issues.apache.org/jira/browse/SPARK-28022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-28022.
-----------------------------------
    Resolution: Invalid

From the Spark side, the suggested idea looks invalid to me. You may achieve this through the K8s scheduler.

BTW, for dynamic allocation, SPARK-20628 might be the better solution we have; it's already merged.
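For context, dynamic allocation on Kubernetes is typically enabled with settings along these lines (a minimal sketch only; the exact option set depends on the Spark version, and shuffle tracking is what makes this usable without an external shuffle service on K8s):

```shell
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  ...
```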

> k8s pod affinity to achieve cloud native friendly autoscaling 
> --------------------------------------------------------------
>
>                 Key: SPARK-28022
>                 URL: https://issues.apache.org/jira/browse/SPARK-28022
>             Project: Spark
>          Issue Type: New Feature
>          Components: Kubernetes
>    Affects Versions: 3.0.0
>            Reporter: Henry Yu
>            Priority: Major
>
> Hi, in order to achieve cloud-native-friendly autoscaling, I propose adding a pod affinity feature.
> Traditionally, when we run Spark on a fixed-size YARN cluster, it makes sense to spread containers across every node.
> With cloud-native resource management, we want to release a node when we no longer need it.
> A pod affinity feature would place all pods of a given application on a subset of nodes instead of across all nodes.
> By the way, using a pod template is not a good choice here; adding the application id to a pod affinity term at submit time is more robust.
>  
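To make the proposal concrete, the affinity term the reporter describes could look roughly like the following sketch, which builds a Kubernetes podAffinity stanza keyed by the Spark application id so all of an application's pods prefer to co-locate (letting the cluster autoscaler drain the remaining nodes). The helper name and the use of a preferred (soft) affinity are illustrative assumptions, not an existing Spark API; `spark-app-selector` is the label Spark on K8s applies to its pods.

```python
def build_pod_affinity(app_id: str) -> dict:
    """Sketch: a k8s podAffinity stanza that prefers co-locating all
    pods carrying the same Spark application id (hypothetical helper,
    not part of Spark)."""
    return {
        "podAffinity": {
            "preferredDuringSchedulingIgnoredDuringExecution": [
                {
                    # Soft preference, so scheduling still succeeds when
                    # the preferred nodes are full.
                    "weight": 100,
                    "podAffinityTerm": {
                        "labelSelector": {
                            # Spark labels its pods with the app id.
                            "matchLabels": {"spark-app-selector": app_id}
                        },
                        # Co-locate at node granularity.
                        "topologyKey": "kubernetes.io/hostname",
                    },
                }
            ]
        }
    }

affinity = build_pod_affinity("spark-application-12345")
term = affinity["podAffinity"]["preferredDuringSchedulingIgnoredDuringExecution"][0]
print(term["podAffinityTerm"]["labelSelector"]["matchLabels"])
```

A soft (`preferred...`) rather than hard (`required...`) term is the safer design choice here, since a hard co-location requirement could leave executors unschedulable once the preferred nodes fill up.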



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org