Posted to user@spark.apache.org by Teng Qiu <te...@gmail.com> on 2016/07/15 16:15:28 UTC

standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

Hi,

http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
The standalone cluster mode currently only supports a simple FIFO
scheduler across applications.

Is this sentence still true? Any progress on this? It would be really
helpful. Is there a roadmap?

Thanks

Teng

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org


Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

Posted by Michael Gummelt <mg...@mesosphere.io>.
DC/OS was designed to reduce the operational cost of maintaining a cluster,
and DC/OS Spark runs well on it.

On Sat, Jul 16, 2016 at 11:11 AM, Teng Qiu <te...@gmail.com> wrote:

> Hi Mark, thanks. We want to keep our system as simple as possible:
> using YARN would mean maintaining a full-size Hadoop cluster, and
> since we use S3 as the storage layer, HDFS is not needed, so a Hadoop
> cluster is a bit of an overkill. Mesos is an option, but it still
> brings extra operational costs.
>
> So... do you have any suggestions?
>
> Thanks
>
>
> 2016-07-15 18:51 GMT+02:00 Mark Hamstra <ma...@clearstorydata.com>:
> > Nothing has changed in that regard, nor is there likely to be "progress",
> > since more sophisticated or capable resource scheduling at the
> Application
> > level is really beyond the design goals for standalone mode.  If you want
> > more in the way of multi-Application resource scheduling, then you
> should be
> > looking at Yarn or Mesos.  Is there some reason why neither of those
> options
> > can work for you?
> >
> > On Fri, Jul 15, 2016 at 9:15 AM, Teng Qiu <te...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >>
> >>
> http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
> >> The standalone cluster mode currently only supports a simple FIFO
> >> scheduler across applications.
> >>
> >> Is this sentence still true? Any progress on this? It would be
> >> really helpful. Is there a roadmap?
> >>
> >> Thanks
> >>
> >> Teng
> >>
> >
>


-- 
Michael Gummelt
Software Engineer
Mesosphere

Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

Posted by Teng Qiu <te...@gmail.com>.
Hi Mark, thanks. We want to keep our system as simple as possible:
using YARN would mean maintaining a full-size Hadoop cluster, and
since we use S3 as the storage layer, HDFS is not needed, so a Hadoop
cluster is a bit of an overkill. Mesos is an option, but it still
brings extra operational costs.

So... do you have any suggestions?

Thanks


2016-07-15 18:51 GMT+02:00 Mark Hamstra <ma...@clearstorydata.com>:
> Nothing has changed in that regard, nor is there likely to be "progress",
> since more sophisticated or capable resource scheduling at the Application
> level is really beyond the design goals for standalone mode.  If you want
> more in the way of multi-Application resource scheduling, then you should be
> looking at Yarn or Mesos.  Is there some reason why neither of those options
> can work for you?
>
> On Fri, Jul 15, 2016 at 9:15 AM, Teng Qiu <te...@gmail.com> wrote:
>>
>> Hi,
>>
>>
>> http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
>> The standalone cluster mode currently only supports a simple FIFO
>> scheduler across applications.
>>
>> Is this sentence still true? Any progress on this? It would be
>> really helpful. Is there a roadmap?
>>
>> Thanks
>>
>> Teng
>>
>



Re: standalone mode only supports FIFO scheduler across applications? Still the case in Spark 2.0?

Posted by Mark Hamstra <ma...@clearstorydata.com>.
Nothing has changed in that regard, nor is there likely to be "progress",
since more sophisticated or capable resource scheduling at the Application
level is really beyond the design goals for standalone mode.  If you want
more in the way of multi-Application resource scheduling, then you should
be looking at Yarn or Mesos.  Is there some reason why neither of those
options can work for you?
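For what it's worth, the standalone docs in the same section note that FIFO
does not have to mean one application monopolizing the cluster: you can cap
the cores each application claims so several can run side by side. A minimal
sketch (the master URL, core counts, memory size, and jar name are
placeholders, not from this thread):

```shell
# Cap the cores a single application may claim, so that later
# applications in the FIFO queue can still get resources.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.cores.max=8 \
  --conf spark.executor.memory=4g \
  my-app.jar

# Alternatively, set a cluster-wide default for applications that
# don't set spark.cores.max, via spark-defaults.conf on the master:
#   spark.deploy.defaultCores=8
```

This doesn't add a fair or capacity scheduler, but it is the documented way
to share a standalone cluster between concurrent applications.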

On Fri, Jul 15, 2016 at 9:15 AM, Teng Qiu <te...@gmail.com> wrote:

> Hi,
>
>
> http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
> The standalone cluster mode currently only supports a simple FIFO
> scheduler across applications.
>
> Is this sentence still true? Any progress on this? It would be
> really helpful. Is there a roadmap?
>
> Thanks
>
> Teng
>