Posted to issues@spark.apache.org by "Sun Rui (JIRA)" <ji...@apache.org> on 2016/09/13 09:21:22 UTC
[jira] [Updated] (SPARK-17522) [MESOS] More even distribution of executors on Mesos cluster
[ https://issues.apache.org/jira/browse/SPARK-17522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sun Rui updated SPARK-17522:
----------------------------
Description:
MesosCoarseGrainedSchedulerBackend launches executors round-robin across the accepted offers received in a single batch, but in practice executors typically end up on a small number of slaves.
The cause is that on a cluster of many nodes, MesosCoarseGrainedSchedulerBackend usually receives only one offer per batch, so the round-robin assignment across offers has no effect: each batch of executors is packed onto a single offer. As a result, executors land on fewer slave nodes than expected, which hurts data locality.
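To see why a one-offer batch defeats the round-robin pass, here is a small, self-contained simulation (hypothetical names, not the actual backend code) that mirrors the while/for shape of buildMesosTasks() and contrasts one offer per batch with all offers arriving in one batch:
{code}
// Hypothetical sketch of the packing behavior; the real backend also
// does resource matching, constraints, etc.
object SpreadSim {
  // Launch executors round-robin over each batch of offers, looping
  // until the batch can take no more (as buildMesosTasks() does today).
  def assign(offerBatches: Seq[Seq[String]], executorsWanted: Int,
             slotsPerOffer: Int): Map[String, Int] = {
    var remaining = executorsWanted
    val placed = scala.collection.mutable.Map.empty[String, Int].withDefaultValue(0)
    for (offers <- offerBatches if remaining > 0) {
      val slots = scala.collection.mutable.Map(offers.map(_ -> slotsPerOffer): _*)
      var launchTasks = true
      while (launchTasks) {
        launchTasks = false
        for (offer <- offers if remaining > 0 && slots(offer) > 0) {
          placed(offer) += 1; slots(offer) -= 1; remaining -= 1
          launchTasks = true
        }
      }
    }
    placed.toMap
  }

  def main(args: Array[String]): Unit = {
    // One offer per batch (the case observed on a large cluster):
    // the first node offered absorbs all 8 executors.
    println(assign(Seq(Seq("node1"), Seq("node2"), Seq("node3")), 8, 10))
    // All three offers in a single batch: round-robin spreads them 3/3/2.
    println(assign(Seq(Seq("node1", "node2", "node3")), 8, 10))
  }
}
{code}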
An experimental small change to MesosCoarseGrainedSchedulerBackend.buildMesosTasks() gives a noticeably better executor distribution across nodes:
{code}
while (launchTasks) {
  launchTasks = false
  for (offer <- offers) {
    ...  // launching a task on an offer sets launchTasks = true
  }
+ // With spread-out enabled (the default), stop after a single
+ // round-robin pass; remaining executors are left for future offers,
+ // which typically come from other nodes.
+ if (conf.getBoolean("spark.deploy.spreadOut", true)) {
+   launchTasks = false
+ }
}
tasks.toMap
{code}
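For context, the proposed key reuses the standalone master's existing spark.deploy.spreadOut option (default true), which makes the same choice between spreading applications across workers and consolidating them onto as few nodes as possible. If the patch were adopted with that key, an application preferring consolidation could opt out; a minimal sketch, assuming the key name carries over unchanged:
{code}
// Sketch only: opting out of spread-out placement for one application,
// assuming the patch reuses the standalone "spark.deploy.spreadOut" key.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("consolidated-app")
  .set("spark.deploy.spreadOut", "false")
{code}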
One of my Spark programs runs 30% faster with this change thanks to the better data locality.
> [MESOS] More even distribution of executors on Mesos cluster
> ------------------------------------------------------------
>
> Key: SPARK-17522
> URL: https://issues.apache.org/jira/browse/SPARK-17522
> Project: Spark
> Issue Type: Improvement
> Components: Mesos
> Affects Versions: 2.0.0
> Reporter: Sun Rui
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org