Posted to issues@spark.apache.org by "Luca Bruno (JIRA)" <ji...@apache.org> on 2016/04/29 10:09:12 UTC
[jira] [Comment Edited] (SPARK-14977) Fine grained mode in Mesos is not fair
[ https://issues.apache.org/jira/browse/SPARK-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15263720#comment-15263720 ]
Luca Bruno edited comment on SPARK-14977 at 4/29/16 8:08 AM:
-------------------------------------------------------------
Thanks for the reply.
Yes, they are long running. However, the two Spark frameworks keep creating new tasks after old tasks end. I thought that when such tasks end, their resources were released back to Mesos? Perhaps I'm wrong, and resources are allocated to the framework rather than per task, so Mesos cannot offer them to a different framework?
For now we've switched to coarse-grained mode, but it's uglier than fine-grained: a Spark job now always holds its full 2gb, regardless of how many resources sit idle in the cluster.
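(For illustration, roughly how the coarse-grained job gets capped; spark.mesos.coarse, spark.cores.max and spark.executor.memory are standard Spark properties, while the master URL and the application file are placeholders:)

    spark-submit \
      --master mesos://mesos-master:5050 \
      --conf spark.mesos.coarse=true \
      --conf spark.cores.max=2 \
      --conf spark.executor.memory=2g \
      job.py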
> Fine grained mode in Mesos is not fair
> --------------------------------------
>
> Key: SPARK-14977
> URL: https://issues.apache.org/jira/browse/SPARK-14977
> Project: Spark
> Issue Type: Bug
> Components: Mesos
> Affects Versions: 2.0.0
> Environment: Spark commit db75ccb, Debian jessie, Mesos fine grained
> Reporter: Luca Bruno
>
> I've set up a Mesos cluster and I'm running Spark in fine-grained mode.
> Spark defaults to 2 executor cores and 2gb of ram.
> The Mesos cluster has 8 cores and 8gb of ram in total.
> When I submit two Spark jobs simultaneously, Spark always accepts the full offered resources, so the two frameworks end up using 4gb of ram each instead of 2gb.
> If I then submit another Spark job, it is never offered resources by Mesos, at least with the default HierarchicalDRF allocator module.
> Mesos keeps offering the 4gb of ram to the earlier Spark jobs, and Spark keeps accepting the full offers for every new task.
> Hence new Spark jobs have no chance of getting a share.
> Is this something to be solved with a custom Mesos allocator? Or should Spark be fairer instead? Or maybe provide a configuration option to always accept only the minimum resources needed?
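(For reference, a fine-grained submission along the lines described above would look roughly like this; the master URL and the application file are placeholders, and spark.mesos.coarse=false is the standard switch for fine-grained mode:)

    spark-submit \
      --master mesos://mesos-master:5050 \
      --conf spark.mesos.coarse=false \
      --conf spark.executor.memory=2g \
      job.py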
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org