Posted to user@mesos.apache.org by Scott Kinney <sc...@stem.com> on 2016/05/26 03:31:08 UTC

Hadoop install location to use s3 uri

I want to use an S3 URI, but I guess I need Hadoop on the slave. I've unpacked the Hadoop tarball and added 'HADOOP_HOME=/path/to/unpacked/hadoop' to the Marathon app definition's environment, but Mesos still says it can't find Hadoop.


Failed to fetch 's3n://bucket/docker.tar.gz': Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127


Also, is the S3 URI format correct? s3n://bucketname/keyname ?


Thanks!

________________________________
Scott Kinney | DevOps
stem <http://www.stem.com/>   |   m  510.282.1299
100 Rollins Road, Millbrae, California 94030

This e-mail and/or any attachments contain Stem, Inc. confidential and proprietary information and material for the sole use of the intended recipient(s). Any review, use or distribution that has not been expressly authorized by Stem, Inc. is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. Thank you.

Re: Hadoop install location to use s3 uri

Posted by haosdent <ha...@gmail.com>.
Hi @Scott, sorry for the confusion. Pradeep's answer works according to
my test. Thanks for the help, @Pradeep!




-- 
Best Regards,
Haosdent Huang

Re: Hadoop install location to use s3 uri

Posted by Pradeep Chhetri <pr...@gmail.com>.
Hi Scott,

I think setting the HADOOP_HOME env variable in the application definition
will not work. You need to set the environment variable in such a way that
the mesos-slave process can see it.

In order to achieve that, you have two options:

* Either pass the --hadoop_home flag when starting the mesos-slave daemon.
* Or set the HADOOP_HOME environment variable before starting the
mesos-slave daemon so that it can refer to it.

Let us know how it goes.
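For example (a rough sketch; /opt/hadoop-2.7.2 and the master address are
placeholders, substitute your actual unpacked Hadoop path and cluster), the
two options look roughly like this:

```shell
# Option 1: pass the flag when starting the slave daemon:
#   mesos-slave --master=zk://master.example.com:2181/mesos \
#               --hadoop_home=/opt/hadoop-2.7.2

# Option 2: export the variable in the slave's environment
# (e.g. in /etc/default/mesos-slave or its init script)
# before the daemon starts:
export HADOOP_HOME=/opt/hadoop-2.7.2

# The fetcher can then invoke $HADOOP_HOME/bin/hadoop to resolve
# hdfs:// and s3n:// URIs.
```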




-- 
Regards,
Pradeep Chhetri

Re: Hadoop install location to use s3 uri

Posted by Scott Kinney <sc...@stem.com>.
Here is my app def:

https://gist.github.com/skinney6/a63ff7f0f8311faaabaf0399702a403f




Re: Hadoop install location to use s3 uri

Posted by haosdent <ha...@gmail.com>.
It looks like it could not read the HADOOP_HOME correctly. Otherwise the
error message would be "/path/to/unpacked/hadoop/bin/hadoop version 2>&1".
Could you show your Marathon application definition?
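To illustrate (a simplified sketch, not the actual Mesos source): when
HADOOP_HOME is visible, the fetcher runs the hadoop binary under it;
otherwise it falls back to a plain PATH lookup, and exit status 127
("command not found") is what you get when no hadoop is on the PATH:

```shell
# Sketch of how the fetcher picks the hadoop binary (illustrative only).
if [ -n "$HADOOP_HOME" ]; then
    HADOOP_CMD="$HADOOP_HOME/bin/hadoop"   # HADOOP_HOME was visible to the slave
else
    HADOOP_CMD="hadoop"                    # PATH lookup; exit 127 if not installed
fi
echo "Would run: $HADOOP_CMD version 2>&1"
```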




-- 
Best Regards,
Haosdent Huang