Posted to issues@spark.apache.org by "Thomas Graves (Jira)" <ji...@apache.org> on 2020/01/10 14:34:00 UTC

[jira] [Assigned] (SPARK-30448) accelerator aware scheduling enforce cores as limiting resource

     [ https://issues.apache.org/jira/browse/SPARK-30448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Graves reassigned SPARK-30448:
-------------------------------------

    Assignee: Thomas Graves

> accelerator aware scheduling enforce cores as limiting resource
> ---------------------------------------------------------------
>
>                 Key: SPARK-30448
>                 URL: https://issues.apache.org/jira/browse/SPARK-30448
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Thomas Graves
>            Assignee: Thomas Graves
>            Priority: Major
>
> For the first version of accelerator-aware scheduling (SPARK-27495), the SPIP stated that we could support dynamic allocation because we were going to have a strict requirement that we don't waste any resources. That meant the number of slots each executor has could be calculated from the executor cores and task cpus, just as is done today.
> Somewhere along the line of development we relaxed that and now only warn when we are wasting resources. This breaks the dynamic allocation logic if the limiting resource is no longer the cores: we will request fewer executors than we really need to run everything, as illustrated in the sketch below.
> We have to enforce that cores are always the limiting resource, so we should throw if they are not.
> I guess we could make this a requirement only when dynamic allocation is on, but to keep the behavior consistent I would say we just require it across the board.
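> For illustration, a minimal sketch of how the slot calculation diverges (hypothetical configuration values; the variable names are stand-ins for illustration, not actual Spark internals):
> {code:scala}
> // Hypothetical executor/task resource configuration (illustrative values only)
> val executorCores = 4   // spark.executor.cores
> val taskCpus      = 1   // spark.task.cpus
> val executorGpus  = 1   // spark.executor.resource.gpu.amount
> val taskGpus      = 1   // spark.task.resource.gpu.amount
>
> // Slots derived from cores alone, which is what the dynamic allocation sizing assumes:
> val slotsFromCores = executorCores / taskCpus   // 4
>
> // Slots actually available once the GPU requirement is considered:
> val slotsFromGpus = executorGpus / taskGpus     // 1
>
> // Each executor can really only run min(4, 1) = 1 task at a time, so sizing the
> // executor request by slotsFromCores requests roughly 4x fewer executors than
> // are needed to run all pending tasks.
> val effectiveSlots = math.min(slotsFromCores, slotsFromGpus)
> {code}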



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org