Posted to user@mesos.apache.org by Andreas Tsarida <an...@teralytics.ch> on 2016/04/13 18:53:17 UTC

Problems with scheduling tasks in Mesos and Spark

Hello,

I’m trying to figure out a solution for dynamic resource allocation in Mesos within the same framework (Spark).

Scenario:
1 - run a Spark job in coarse-grained mode
2 - run a second job in coarse-grained mode

The second job will not start until the first job finishes, which is not what I want. The problem is small when the running job doesn’t take too long, but when it does, nobody else can work on the cluster.

The best scenario would be for Mesos to revoke resources from the first job and allocate them to the second job.

Is there anybody who has solved this issue in another way?

Thanks

Re: Problems with scheduling tasks in Mesos and Spark

Posted by Hans van den Bogert <ha...@gmail.com>.
Hi, 

This is a hard problem to solve at the moment if your requirement is that Spark really needs to operate in coarse-grained mode.
I assume this is a problem because you are trying to run two Spark applications (as opposed to two jobs in one application).

An obvious “solution” would be to run both applications in fine-grained mode, where each Spark task runs as its own Mesos task and cores flow back to Mesos as tasks finish; see the sketch below.
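A minimal sketch of enabling fine-grained mode (assuming Spark 1.6 on Mesos; the master URL and app name below are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    // Fine-grained mode launches each Spark task as its own Mesos task,
    // so cores are released back to Mesos as tasks complete.
    val conf = new SparkConf()
      .setMaster("mesos://zk://host:2181/mesos") // placeholder master URL
      .setAppName("fine-grained-app")            // placeholder app name
      .set("spark.mesos.coarse", "false")        // explicit; fine-grained is the 1.6 default

    val sc = new SparkContext(conf)

The trade-off is higher per-task launch latency, which is why coarse-grained mode exists in the first place.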
You could also check whether both jobs can be submitted through the same SparkContext with its job scheduler set to FAIR (the default is FIFO); see the sketch after this paragraph. However, I don’t have enough context to know whether this latter option is applicable to you.
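A minimal sketch of that idea (the pool names and the RDD work are purely illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("shared-context")
      .set("spark.scheduler.mode", "FAIR") // default is FIFO

    val sc = new SparkContext(conf)

    // Submit each job from its own thread so the FAIR scheduler can
    // interleave their tasks on the same set of executors.
    val jobA = new Thread(new Runnable {
      def run(): Unit = {
        sc.setLocalProperty("spark.scheduler.pool", "poolA") // per-thread pool
        sc.parallelize(1 to 1000000).map(_ * 2).count()
      }
    })
    val jobB = new Thread(new Runnable {
      def run(): Unit = {
        sc.setLocalProperty("spark.scheduler.pool", "poolB")
        sc.parallelize(1 to 1000000).filter(_ % 2 == 0).count()
      }
    })
    jobA.start(); jobB.start()
    jobA.join(); jobB.join()

Note this only shares resources between jobs inside one application; it doesn’t help if the two jobs must be separate applications.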

If you need more help, please provide some context about what you’re trying to achieve.

Regards,

Hans

Re: Problems with scheduling tasks in Mesos and Spark

Posted by Shuai Lin <li...@gmail.com>.
Have you tried setting "spark.cores.max" in your SparkConf? Check
http://spark.apache.org/docs/1.6.1/running-on-mesos.html :

> You can cap the maximum number of cores using conf.set("spark.cores.max",
> "10") (for example).

