Posted to user@spark.apache.org by Denny Lee <de...@gmail.com> on 2014/12/30 17:25:07 UTC

Spark 1.2 and Mesos 0.21.0 spark.executor.uri issue?

I've been working with Spark 1.2 and Mesos 0.21.0, and although I have set
spark.executor.uri within spark-env.sh (and directly within bash as well),
the Mesos slaves do not seem to be able to access the Spark tgz file via
HTTP or HDFS, per the messages below.


14/12/30 15:57:35 INFO SparkILoop: Created spark context..
Spark context available as sc.

scala> 14/12/30 15:57:38 INFO CoarseMesosSchedulerBackend: Mesos task 0 is
now TASK_FAILED
14/12/30 15:57:38 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now
TASK_FAILED
14/12/30 15:57:39 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now
TASK_FAILED
14/12/30 15:57:41 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now
TASK_FAILED
14/12/30 15:57:41 INFO CoarseMesosSchedulerBackend: Blacklisting Mesos
slave value: "20141228-183059-3045950474-5050-2788-S1"
 due to too many failures; is Spark installed on it?
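
For reference, the spark.executor.uri setting itself looks roughly like the
following (a sketch only; the hostnames, ports, and HDFS path are placeholders,
not my actual locations):

  # conf/spark-env.sh: tell Mesos executors where to fetch the Spark tarball
  export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
  export SPARK_EXECUTOR_URI=hdfs://namenode:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz

  # or equivalently, in conf/spark-defaults.conf:
  # spark.executor.uri   hdfs://namenode:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz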


I've verified that the Mesos slaves can access both the HTTP and HDFS
locations.  I'll start digging into the Mesos logs, but I was wondering if
anyone had run into this issue before.  I was able to get this to run
successfully on Spark 1.1 on GCP; the environment I'm currently experimenting
with is Digital Ocean, so perhaps that is a factor?
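
For what it's worth, the access check I ran from the slaves was along these
lines (hostnames and paths are placeholders for my actual locations):

  # run on a Mesos slave: confirm the tarball is reachable over HTTP and HDFS
  curl -I http://some-host:8000/spark/spark-1.2.0-bin-hadoop2.4.tgz
  hadoop fs -ls hdfs://namenode:8020/spark/spark-1.2.0-bin-hadoop2.4.tgz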

Thanks!
Denny

Re: Spark 1.2 and Mesos 0.21.0 spark.executor.uri issue?

Posted by Denny Lee <de...@gmail.com>.
After digging through the task logs, I think this may have to do with the
Hadoop distro within the Digital Ocean Mesosphere configuration and Spark
1.2.  The Guava incompatibilities were definitely popping up again.
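
In case it helps anyone else hitting this, the comparison I was making was
roughly the following (a sketch; the jar locations are placeholders for this
particular distro):

  # Guava shipped with the Hadoop distro on the slave
  ls /usr/lib/hadoop/lib/guava-*.jar

  # Guava classes packaged inside the Spark 1.2 assembly that the executor
  # unpacks (they may be shaded/relocated, so match on the name loosely)
  unzip -l spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-*.jar | grep -i guava | head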

On Wed Dec 31 2014 at 12:36:42 AM Tim Chen <ti...@mesosphere.io> wrote:

> Hi Denny,
>
> What do you see in the task log?
>
> Thanks!
>
> Tim

Re: Spark 1.2 and Mesos 0.21.0 spark.executor.uri issue?

Posted by Tim Chen <ti...@mesosphere.io>.
Hi Denny,

What do you see in the task log?

Thanks!

Tim
