Posted to user@spark.apache.org by Konstantin Kudryavtsev <ku...@gmail.com> on 2014/07/17 11:33:05 UTC
Spark scheduling with Capacity scheduler
Hi all,
I'm using HDP 2.0 with YARN, and I'm running both MapReduce and Spark jobs on this
cluster. Is it possible to use the Capacity Scheduler to manage Spark jobs
as well as MR jobs? That is, I can submit an MR job to a specific
queue; can I do the same with a Spark job?
Thank you in advance,
Konstantin Kudryavtsev
Re: Spark scheduling with Capacity scheduler
Posted by Matei Zaharia <ma...@gmail.com>.
It's possible using the --queue argument of spark-submit. Unfortunately this is not documented at http://spark.apache.org/docs/latest/running-on-yarn.html, but it is listed if you run spark-submit --help or invoke spark-submit with no arguments.
Matei
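A minimal sketch of the suggestion above, routing a Spark job to a specific YARN Capacity Scheduler queue with spark-submit's --queue flag. The queue name "thequeue", the example class, and the jar path are placeholders for illustration; substitute your own queue name and application.

```shell
# Submit a Spark application to the YARN queue "thequeue" (placeholder name).
# The example class and jar path below are illustrative; point them at
# your own application.
spark-submit \
  --master yarn-cluster \
  --queue thequeue \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples.jar \
  10
```

The queue must already be defined in the cluster's capacity-scheduler.xml; spark-submit only selects it, it does not create it.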