Posted to user@spark.apache.org by John Omernik <jo...@omernik.com> on 2014/09/20 19:16:09 UTC

SparkSQL Thriftserver in Mesos

I am running the Thrift server in SparkSQL on the node I compiled Spark
on.  When I run it, tasks only succeed if they land on that node;
executors started on nodes I didn't compile Spark on (and thus don't
have the build directory) fail.  Shouldn't Spark be distributed
automatically via the executor URI in my spark-defaults for Mesos?

Here is the error on nodes with Lost executors

sh: 1: /opt/mapr/spark/spark-1.1.0-SNAPSHOT/sbin/spark-executor: not found

Re: SparkSQL Thriftserver in Mesos

Posted by Cheng Lian <li...@gmail.com>.
You can avoid installing Spark on each node by uploading a Spark 
distribution tarball to HDFS and setting |spark.executor.uri| to the 
HDFS location. This way, Mesos will download and extract the tarball 
before launching containers. Please refer to this Spark documentation page 
<http://spark.apache.org/docs/latest/running-on-mesos.html> for details.
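
As a concrete sketch of this setup (the HDFS location and tarball name 
below are placeholders, not taken from this thread): after building a 
distribution tarball (e.g. with make-distribution.sh) and uploading it 
to HDFS with something like `hadoop fs -put spark-1.1.0-SNAPSHOT-bin.tgz 
/spark/`, the driver side only needs:

    # conf/spark-defaults.conf (HDFS path is an example)
    spark.executor.uri   hdfs:///spark/spark-1.1.0-SNAPSHOT-bin.tgz

With this set, every Mesos slave fetches the same tarball, so no node 
needs a local Spark build.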

However, using |spark.executor.uri| together with fine-grained mode 
(which is the default mode) really kills performance, because Mesos 
downloads and extracts the tarball every time a Spark /task/ (not 
application) is launched.
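
If fine-grained mode is the bottleneck, the same documentation page 
describes coarse-grained mode, where the tarball is fetched once per 
long-lived executor rather than once per task. A minimal sketch 
(property name as of Spark 1.1):

    # conf/spark-defaults.conf
    spark.mesos.coarse   true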

On 9/21/14 1:16 AM, John Omernik wrote:

> I am running the Thrift server in SparkSQL on the node I compiled 
> Spark on.  When I run it, tasks only succeed if they land on that 
> node; executors started on nodes I didn't compile Spark on (and thus 
> don't have the build directory) fail.  Shouldn't Spark be distributed 
> automatically via the executor URI in my spark-defaults for 
> Mesos?
>
> Here is the error on nodes with Lost executors
>
> sh: 1: /opt/mapr/spark/spark-1.1.0-SNAPSHOT/sbin/spark-executor: not found


Re: SparkSQL Thriftserver in Mesos

Posted by Dean Wampler <de...@gmail.com>.
The Mesos install guide says this:

"To use Mesos from Spark, you need a Spark binary package available in a
place accessible by Mesos, and a Spark driver program configured to connect
to Mesos."

For example, putting it in HDFS or copying it to each node in the same
location should do the trick.

https://spark.apache.org/docs/latest/running-on-mesos.html
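
One way to realize the "copying it to each node" option (a sketch on my 
part; the path below, and the assumption that the Mesos slaves can read 
it locally, are not from the thread): place the tarball at the same 
absolute path on every slave, e.g. with scp, then point the executor 
URI at that path instead of an HDFS location:

    # conf/spark-defaults.conf (local path is an example)
    spark.executor.uri   /opt/spark-dist/spark-1.1.0-SNAPSHOT-bin.tgz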



Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Mon, Sep 22, 2014 at 2:35 PM, John Omernik <jo...@omernik.com> wrote:

> Any thoughts on this?
>
> On Sat, Sep 20, 2014 at 12:16 PM, John Omernik <jo...@omernik.com> wrote:
>
>> I am running the Thrift server in SparkSQL on the node I compiled Spark
>> on.  When I run it, tasks only succeed if they land on that node;
>> executors started on nodes I didn't compile Spark on (and thus don't
>> have the build directory) fail.  Shouldn't Spark be distributed
>> automatically via the executor URI in my spark-defaults for Mesos?
>>
>> Here is the error on nodes with Lost executors
>>
>> sh: 1: /opt/mapr/spark/spark-1.1.0-SNAPSHOT/sbin/spark-executor: not found

Re: SparkSQL Thriftserver in Mesos

Posted by John Omernik <jo...@omernik.com>.
Any thoughts on this?

On Sat, Sep 20, 2014 at 12:16 PM, John Omernik <jo...@omernik.com> wrote:

> I am running the Thrift server in SparkSQL on the node I compiled Spark
> on.  When I run it, tasks only succeed if they land on that node;
> executors started on nodes I didn't compile Spark on (and thus don't
> have the build directory) fail.  Shouldn't Spark be distributed
> automatically via the executor URI in my spark-defaults for Mesos?
>
> Here is the error on nodes with Lost executors
>
> sh: 1: /opt/mapr/spark/spark-1.1.0-SNAPSHOT/sbin/spark-executor: not found