Posted to issues@spark.apache.org by "Jongyoul Lee (JIRA)" <ji...@apache.org> on 2015/01/08 06:35:34 UTC

[jira] [Comment Edited] (SPARK-4922) Support dynamic allocation for coarse-grained Mesos

    [ https://issues.apache.org/jira/browse/SPARK-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268826#comment-14268826 ] 

Jongyoul Lee edited comment on SPARK-4922 at 1/8/15 5:35 AM:
-------------------------------------------------------------

[~andrewor14] Hi, I have a basic question about your idea. I'm using fine-grained Mesos to run my jobs, and that mode already allocates resources dynamically whenever the task scheduler asks for them. What do you think the difference is between your idea and fine-grained mode? Unlike coarse-grained mode, fine-grained mode adjusts the number of cores per executor and allows two or more executors on each slave. I think that if we made the number of cores per Mesos executor configurable in fine-grained mode - right now each executor is fixed at one core - we could satisfy the dynamic allocation idea. I also read SPARK-4751, and I can handle that issue by using fine-grained mode. And how do you plan to adjust resources? A new API for increasing or decreasing cores, or just {{spark.cores.max}}?
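For reference, the fine-grained vs. coarse-grained behavior discussed above is toggled by a couple of Spark configuration properties; a minimal sketch (property names as of Spark 1.2, the values are illustrative only):

```properties
# Fine-grained Mesos mode (the default in Spark 1.2): each Spark task runs
# as its own Mesos task, so cores are returned to Mesos as tasks finish.
spark.mesos.coarse   false

# Cap on the total cores the application may claim across the cluster;
# in coarse-grained mode this is the knob the comment is asking about.
spark.cores.max      8
```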



> Support dynamic allocation for coarse-grained Mesos
> ---------------------------------------------------
>
>                 Key: SPARK-4922
>                 URL: https://issues.apache.org/jira/browse/SPARK-4922
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 1.2.0
>            Reporter: Andrew Or
>            Priority: Critical
>
> This brings SPARK-3174, which provided dynamic allocation of cluster resources to Spark on YARN applications, to Mesos coarse-grained mode. 
> Note that the translation is not as trivial as adding a code path that exposes the request and kill mechanisms, as we did for YARN in SPARK-3822. This is because Mesos coarse-grained mode schedules based on the number of cores allowed for an application (as in standalone mode) rather than the number of executors (as in YARN mode). For more detail, please see SPARK-4751.
> If you intend to work on this, please provide a detailed design doc!
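The YARN-side mechanism referenced above (SPARK-3174 / SPARK-3822) is driven by Spark's dynamic-allocation settings; a hedged sketch of the properties involved (names per Spark 1.2, values illustrative):

```properties
# Enable dynamic allocation; it requires the external shuffle service so
# that shuffle files survive when an idle executor is removed.
spark.dynamicAllocation.enabled        true
spark.shuffle.service.enabled          true

# Bounds on the executor count the allocation manager may request;
# this executor-count model is what doesn't map directly onto the
# cores-based scheduling of Mesos coarse-grained mode.
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   10
```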



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org