Posted to user@mesos.apache.org by SLiZn Liu <sl...@gmail.com> on 2015/09/14 06:10:33 UTC

Spark Job Submitting on Mesos Cluster

Hi Mesos Users,

I’m trying to run Spark jobs on my Mesos cluster. However, I discovered that
my Spark job must be submitted by the same user who started Mesos;
otherwise an ExecutorLostFailure is raised and the job won’t be executed.
Is there any way for every user to share the same Mesos cluster in harmony? =D

BR,
Todd Leo

Re: Spark Job Submitting on Mesos Cluster

Posted by SLiZn Liu <sl...@gmail.com>.
No, we set up a specific user to start Mesos; it isn't root.

On Mon, Sep 14, 2015 at 1:05 PM haosdent <ha...@gmail.com> wrote:

> Do you start your Mesos cluster as root?

Re: Spark Job Submitting on Mesos Cluster

Posted by zhou weitao <zh...@gmail.com>.
At the same time, make sure SPARK_USER is set to a user that actually exists
on the slave before executing your Spark program.
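
A quick sanity check on each slave could be something like this (just a
sketch, assuming SPARK_USER is already exported in the environment):

id "$SPARK_USER" || echo "$SPARK_USER does not exist on $(hostname)"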

2015-09-14 16:29 GMT+08:00 SLiZn Liu <sl...@gmail.com>:

> I found the --no-switch_user flag in the Mesos slave configuration. Will
> give it a try. Thanks Tim and haosdent!

Re: Spark Job Submitting on Mesos Cluster

Posted by Tim Chen <ti...@mesosphere.io>.
Thanks Haosdent!

Tim

Re: Spark Job Submitting on Mesos Cluster

Posted by SLiZn Liu <sl...@gmail.com>.
I found the --no-switch_user flag in the Mesos slave configuration. Will give
it a try. Thanks Tim and haosdent!

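If I read the configuration docs right, that should just mean restarting the
slaves with something like this (a sketch; the master address is a
placeholder):

mesos-slave --master=<master-host>:5050 --no-switch_user

so that every executor runs as the user the slave process itself runs as.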

Re: Spark Job Submitting on Mesos Cluster

Posted by haosdent <ha...@gmail.com>.
> turn off --switch-user flag in the Mesos slave
--no-switch_user :-)

-- 
Best Regards,
Haosdent Huang

Re: Spark Job Submitting on Mesos Cluster

Posted by Tim Chen <ti...@mesosphere.io>.
Actually, --proxy-user is more about which user you are impersonating to run
the driver; it is not the user that is going to be passed to Mesos to run as.

The way to use a particular user when running a Spark job is to set the
SPARK_USER environment variable, and that user will be passed to Mesos.

Alternatively, you can also turn off the --switch-user flag in the Mesos
slave so that all jobs will just use the slave's current user.
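
For example, a submission along these lines should work (just a sketch: the
master host, class name, and jar are placeholders, and the "spark" account is
assumed to exist on every slave):

export SPARK_USER=spark
spark-submit --master mesos://<master-host>:5050 --class com.example.MyApp my-app.jar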

Tim

Re: Spark Job Submitting on Mesos Cluster

Posted by SLiZn Liu <sl...@gmail.com>.
Thanks Tommy, did you mean adding a proxy user like this:

spark-submit --proxy-user <MESOS-STARTER> ...

where <MESOS-STARTER> represents the user who started Mesos?

And is this parameter documented anywhere?

Re: Spark Job Submitting on Mesos Cluster

Posted by tommy xiao <xi...@gmail.com>.
@SLiZn Liu yes, you need to add the proxy_user parameter, and your cluster
should have the proxy_user account in /etc/passwd on every node.
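
For instance (a sketch; "alice" stands for whatever shared account you pick,
and it has to resolve on every node):

getent passwd alice || echo "alice is missing on $(hostname)"
spark-submit --proxy-user alice --master mesos://<master-host>:5050 my-app.jar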

-- 
Deshi Xiao
Twitter: xds2000
E-mail: xiaods(AT)gmail.com

Re: Spark Job Submitting on Mesos Cluster

Posted by haosdent <ha...@gmail.com>.
Do you start your Mesos cluster as root?

-- 
Best Regards,
Haosdent Huang