Posted to issues@spark.apache.org by "Stavros Kontopoulos (JIRA)" <ji...@apache.org> on 2017/03/01 08:05:45 UTC

[jira] [Commented] (SPARK-19373) Mesos implementation of spark.scheduler.minRegisteredResourcesRatio looks at acquired cores rather than registered cores

    [ https://issues.apache.org/jira/browse/SPARK-19373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889719#comment-15889719 ] 

Stavros Kontopoulos commented on SPARK-19373:
---------------------------------------------

[~mgummelt] +1 for task locality + dynamic allocation. That would also mean declining offers that do not satisfy the locality preferences, until offers from the appropriate nodes arrive? In other words, it means trying to optimize locality while receiving essentially random offers, I guess...
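
To make the idea concrete, here is a minimal, self-contained sketch (not Spark's actual scheduler code): offers whose host does not match the tasks' locality preferences are declined until offers from the preferred nodes show up. `Offer`, `launchExecutorOn`, and `declineOffer` are hypothetical stand-ins for the Mesos API, not real Spark or Mesos identifiers.

    object LocalityAwareOffers {
      // Stand-ins for the Mesos offer type and the scheduler's launch/decline calls.
      case class Offer(id: String, hostname: String, cores: Int)

      def launchExecutorOn(offer: Offer): Unit =
        println(s"launching executor on ${offer.hostname}")

      def declineOffer(offer: Offer): Unit =
        println(s"declining offer ${offer.id} from ${offer.hostname}")

      // Accept only offers from hosts that satisfy the pending tasks' locality
      // preferences; decline the rest and wait for better offers.
      def handleOffers(offers: Seq[Offer], preferredHosts: Set[String]): Unit =
        offers.foreach { offer =>
          if (preferredHosts.contains(offer.hostname)) launchExecutorOn(offer)
          else declineOffer(offer)
        }

      def main(args: Array[String]): Unit =
        handleOffers(
          Seq(Offer("o1", "node-a", 4), Offer("o2", "node-c", 4)),
          preferredHosts = Set("node-a", "node-b"))
    }

The open question in practice is how long to keep declining: decline too aggressively and the job may never ramp up; accept too eagerly and locality is lost.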

> Mesos implementation of spark.scheduler.minRegisteredResourcesRatio looks at acquired cores rather than registered cores
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19373
>                 URL: https://issues.apache.org/jira/browse/SPARK-19373
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 1.6.3, 2.0.2, 2.1.0
>            Reporter: Michael Gummelt
>            Assignee: Michael Gummelt
>             Fix For: 2.2.0
>
>
> We're currently using `totalCoresAcquired` to account for registered resources, which is incorrect.  That variable measures the number of cores the scheduler has accepted.  We should be using `totalCoreCount` like the other schedulers do.
> Fixing this is important for locality, since users often want to wait for all executors to come up before scheduling tasks to ensure they get a node-local placement. 
> Original PR that added this support: https://github.com/apache/spark/pull/8672/files
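
For illustration only, a simplified sketch of the check described above. It assumes field names that mirror the coarse-grained Mesos backend (totalCoresAcquired, totalCoreCount), but the surrounding class is reduced to just the counters and the readiness check; it is not the actual Spark code.

    import java.util.concurrent.atomic.AtomicInteger

    // Simplified stand-in for the coarse-grained scheduler backend.
    class MinRegisteredRatioSketch(maxCores: Int, minRegisteredRatio: Double) {
      var totalCoresAcquired = 0                 // cores in offers the scheduler has accepted
      val totalCoreCount = new AtomicInteger(0)  // cores of executors that have actually registered

      // Buggy check: passes as soon as offers are accepted, before executors register.
      def sufficientResourcesRegisteredOld(): Boolean =
        totalCoresAcquired >= maxCores * minRegisteredRatio

      // Fixed check: counts registered cores, as the other scheduler backends do.
      def sufficientResourcesRegistered(): Boolean =
        totalCoreCount.get() >= maxCores * minRegisteredRatio
    }

With the buggy version, a job configured to wait for all executors can start scheduling tasks while executors are still coming up, defeating the node-local placement the setting is meant to guarantee.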



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org