Posted to user@spark.apache.org by Marco Costantini <si...@granatads.com> on 2014/04/07 20:14:45 UTC

AWS Spark-ec2 script with different user

Hi all,
On the old Amazon Linux EC2 images, the user 'root' was enabled for ssh.
Also, it is the default user for the Spark-EC2 script.

Currently, the Amazon Linux images have an 'ec2-user' set up for ssh
instead of 'root'.

I can see that the Spark-EC2 script allows you to specify which user to log
in with, but even when I change this, the script fails for various reasons.
And the output SEEMS to indicate that the script still assumes the specified
user's home directory is '/root'.

Am I using this script wrong?
Has anyone had success with this 'ec2-user' user?
Any ideas?

Please and thank you,
Marco.

Re: AWS Spark-ec2 script with different user

Posted by Marco Costantini <si...@granatads.com>.
Perfect. Now I know what to do, thanks to your help!

Many thanks,
Marco.

Re: AWS Spark-ec2 script with different user

Posted by Shivaram Venkataraman <sh...@eecs.berkeley.edu>.
The AMI should automatically switch between PVM and HVM based on the
instance type you specify on the command line. For reference (note you
don't need to specify this on the command line), the PVM AMI ID
is ami-5bb18832 in us-east-1.

FWIW, we maintain the list of AMI IDs (across regions, for both PVM and
HVM) at
https://github.com/mesos/spark-ec2/tree/v2/ami-list

Thanks
Shivaram
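
An illustrative launch (an editor's sketch, not a command from the thread)
that omits -a and lets spark-ec2 pick the matching AMI; the key pair,
cluster names, and instance types are placeholders:

# HVM-capable instance type -- spark-ec2 should select the HVM AMI
./spark-ec2 -k mykey -i ~/.ssh/mykey.pem -s 2 -t m3.2xlarge launch mycluster

# PV-only instance type (e.g. m1.large) -- spark-ec2 should select the PVM AMI
./spark-ec2 -k mykey -i ~/.ssh/mykey.pem -s 2 -t m1.large launch mycluster-pv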

Re: AWS Spark-ec2 script with different user

Posted by Marco Costantini <si...@granatads.com>.
Ah, tried that. I believe this is an HVM AMI? We are exploring paravirtual
AMIs.

Re: AWS Spark-ec2 script with different user

Posted by Nicholas Chammas <ni...@gmail.com>.
And for the record, that AMI is ami-35b1885c. Again, you don't need to
specify it explicitly; spark-ec2 will default to it.

Re: AWS Spark-ec2 script with different user

Posted by Nicholas Chammas <ni...@gmail.com>.
Marco,

If you call spark-ec2 launch without specifying an AMI, it will default to
the Spark-provided AMI.

Nick

Re: AWS Spark-ec2 script with different user

Posted by Marco Costantini <si...@granatads.com>.
Hi there,
To answer your question: no, there is no reason NOT to use an AMI that Spark
has prepared. The reason we haven't is that we were not aware such AMIs
existed. Would you kindly point us to the documentation where we can read
more about this?

Many many thanks, Shivaram.
Marco.

Re: AWS Spark-ec2 script with different user

Posted by Shivaram Venkataraman <sh...@eecs.berkeley.edu>.
Is there any reason why you want to start with a vanilla Amazon AMI rather
than the ones we build and provide as part of the Spark EC2 scripts? The
AMIs we provide are close to the vanilla AMI, but have the root account
set up properly and come with packages, like Java, that are used by Spark.

If you wish to customize the AMI, you could always start with our AMI and
add any packages you like -- I have definitely done this recently, and it
works with HVM and PVM as far as I can tell.

Shivaram
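
A hypothetical outline of the customization route described above; the key
pair, package names, ssh target, and AMI ID are placeholders:

# 1) Launch a cluster from the default Spark AMI, then ssh in as root.
./spark-ec2 -k mykey -i ~/.ssh/mykey.pem -s 1 launch base-cluster
ssh -i ~/.ssh/mykey.pem root@<master-public-dns>

# 2) Add whatever packages you need.
yum install -y htop tmux

# 3) Create a new AMI from the running instance (via the console or API),
#    then pass its ID to future launches with -a:
./spark-ec2 -k mykey -i ~/.ssh/mykey.pem -a ami-xxxxxxxx -s 10 launch custom-cluster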

Re: AWS Spark-ec2 script with different user

Posted by Marco Costantini <si...@granatads.com>.
I was able to keep the "workaround" ...around... by overwriting the
generated '/root/.ssh/authorized_keys' file with a known-good one, via the
'/etc/rc.local' file (a sketch follows below).
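
A minimal sketch of what that rc.local hook might look like, assuming the
known-good key file is baked into the image at a hypothetical path
/root/.ssh/authorized_keys.bak:

# appended to /etc/rc.local, which runs at the end of every boot
cp /root/.ssh/authorized_keys.bak /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys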



Re: AWS Spark-ec2 script with different user

Posted by Marco Costantini <si...@granatads.com>.
Another thing I didn't mention: the AMI and user used. Naturally, I've
created several of my own AMIs with the following characteristics, none of
which worked.

1) Enabling ssh as root, as per this guide (
http://blog.tiger-workshop.com/enable-root-access-on-amazon-ec2-instance/).
When doing this, I do not specify a user for the spark-ec2 script. What
happens is that it works! But only while it's alive. If I stop the
instance, create an AMI, and launch a new instance based on the new AMI,
the change I made in the '/root/.ssh/authorized_keys' file is overwritten
(see the note after this message).

2) Adding the 'ec2-user' to the 'root' group. This means that the ec2-user
does not have to use sudo to perform any operations needing root
privileges. When doing this, I specify the user 'ec2-user' for the
spark-ec2 script. An error occurs: rsync fails with exit code 23.

I believe HVMs still work. But it would be valuable to the community to
know whether the root-user workaround does or doesn't work anymore for
paravirtual instances.

Thanks,
Marco.
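
An editor's note, not from the thread: the overwriting in (1) matches
cloud-init's disable_root behaviour on Amazon Linux, which rewrites
/root/.ssh/authorized_keys on the first boot of each new instance; and
rsync's exit code 23 in (2) means "partial transfer due to error", which is
consistent with ec2-user lacking write permission on /root (group
membership alone does not grant it). A sketch of a fix for (1), assuming
the stock /etc/cloud/cloud.cfg carries a disable_root line:

# run once inside the instance before creating the AMI, so instances
# launched from it keep direct root logins enabled
sed -i 's/^disable_root:.*/disable_root: false/' /etc/cloud/cloud.cfg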


On Tue, Apr 8, 2014 at 9:51 AM, Marco Costantini <
silvio.costantini@granatads.com> wrote:

> As requested, here is the script I am running. It is a simple shell script
> which calls the spark-ec2 wrapper script. I execute it from the 'ec2' directory
> of Spark, as usual. The AMI used is the raw one from the AWS Quick Start
> section. It is the first option (an Amazon Linux paravirtual image). Any
> ideas or confirmation would be GREATLY appreciated. Please and thank you.
>
>
> #!/bin/sh
>
> export AWS_ACCESS_KEY_ID=MyCensoredKey
> export AWS_SECRET_ACCESS_KEY=MyCensoredKey
>
> AMI_ID=ami-2f726546
>
> ./spark-ec2 -k gds-generic -i ~/.ssh/gds-generic.pem -u ec2-user -s 10 -v
> 0.9.0 -w 300 --no-ganglia -a ${AMI_ID} -m m3.2xlarge -t m3.2xlarge launch
> marcotest
>
>
>
> On Mon, Apr 7, 2014 at 6:21 PM, Shivaram Venkataraman <
> shivaram.venkataraman@gmail.com> wrote:
>
>> Hmm -- That is strange. Can you paste the command you are using to launch
>> the instances? The typical workflow is to use the spark-ec2 wrapper script,
>> following the guidelines at
>> http://spark.apache.org/docs/latest/ec2-scripts.html
>>
>> Shivaram
>>
>>
>> On Mon, Apr 7, 2014 at 1:53 PM, Marco Costantini <
>> silvio.costantini@granatads.com> wrote:
>>
>>> Hi Shivaram,
>>>
>>> OK, so let's assume the script CANNOT take a different user and that it
>>> must be 'root'. The typical workaround is, as you said, to allow ssh with
>>> the root user. Now, don't laugh, but this worked last Friday, and today
>>> (Monday) it no longer works. :D Why? ...
>>>
>>> ...It seems that NOW, when you launch a 'paravirtual' AMI, the root
>>> user's 'authorized_keys' file is always overwritten. This means the
>>> workaround doesn't work anymore! I would LOVE for someone to verify this.
>>>
>>> Just to point out, I am trying to make this work with a paravirtual
>>> instance and not an HVM instance.
>>>
>>> Please and thanks,
>>> Marco.
>>>
>>>
>>> On Mon, Apr 7, 2014 at 4:40 PM, Shivaram Venkataraman <
>>> shivaram.venkataraman@gmail.com> wrote:
>>>
>>>> Right now the spark-ec2 scripts assume that you have root access and a
>>>> lot of internal scripts assume have the user's home directory hard coded as
>>>> /root.   However all the Spark AMIs we build should have root ssh access --
>>>> Do you find this not to be the case ?
>>>>
>>>> You can also enable root ssh access in a vanilla AMI by editing
>>>> /etc/ssh/sshd_config and setting "PermitRootLogin" to yes
>>>>
>>>> Thanks
>>>> Shivaram
>>>>
>>>>
>>>>
>>>> On Mon, Apr 7, 2014 at 11:14 AM, Marco Costantini <
>>>> silvio.costantini@granatads.com> wrote:
>>>>
>>>>> Hi all,
>>>>> On the old Amazon Linux EC2 images, the user 'root' was enabled for
>>>>> ssh. Also, it is the default user for the Spark-EC2 script.
>>>>>
>>>>> Currently, the Amazon Linux images have an 'ec2-user' set up for ssh
>>>>> instead of 'root'.
>>>>>
>>>>> I can see that the Spark-EC2 script allows you to specify which user
>>>>> to log in with, but even when I change this, the script fails for various
>>>>> reasons. And the output SEEMS that the script is still based on the
>>>>> specified user's home directory being '/root'.
>>>>>
>>>>> Am I using this script wrong?
>>>>> Has anyone had success with this 'ec2-user' user?
>>>>> Any ideas?
>>>>>
>>>>> Please and thank you,
>>>>> Marco.
>>>>>
>>>>
>>>>
>>>
>>
>

Re: AWS Spark-ec2 script with different user

Posted by Marco Costantini <si...@granatads.com>.
As requested, here is the script I am running. It is a simple shell script
which calls the spark-ec2 wrapper script. I execute it from the 'ec2'
directory of Spark, as usual. The AMI used is the raw one from the AWS
Quick Start section; it is the first option (an Amazon Linux paravirtual
image). Any ideas or confirmation would be GREATLY appreciated. Please and
thank you.


#!/bin/sh

# AWS credentials used by the spark-ec2 script (values censored).
export AWS_ACCESS_KEY_ID=MyCensoredKey
export AWS_SECRET_ACCESS_KEY=MyCensoredKey

# Amazon Linux paravirtual AMI from the AWS Quick Start section.
AMI_ID=ami-2f726546

# Launch a 10-slave Spark 0.9.0 cluster named 'marcotest' as 'ec2-user',
# waiting 300 seconds for instances to start, with Ganglia disabled;
# -m and -t set the master and slave instance types.
./spark-ec2 -k gds-generic -i ~/.ssh/gds-generic.pem -u ec2-user \
  -s 10 -v 0.9.0 -w 300 --no-ganglia -a ${AMI_ID} \
  -m m3.2xlarge -t m3.2xlarge launch marcotest

Re: AWS Spark-ec2 script with different user

Posted by Shivaram Venkataraman <sh...@gmail.com>.
Hmm -- that is strange. Can you paste the command you are using to launch
the instances? The typical workflow is to use the spark-ec2 wrapper script
following the guidelines at http://spark.apache.org/docs/latest/ec2-scripts.html
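
For a simple case, the launch command from that page looks like this (the
keypair name, identity file, slave count, and cluster name are
placeholders):

./spark-ec2 -k <keypair> -i <key-file> -s <num-slaves> launch <cluster-name>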

Shivaram

Re: AWS Spark-ec2 script with different user

Posted by Marco Costantini <si...@granatads.com>.
Hi Shivaram,

OK, so let's assume the script CANNOT take a different user and that it
must be 'root'. The typical workaround is, as you said, to allow ssh with
the root user. Now, don't laugh, but this worked last Friday, yet today
(Monday) it no longer works. :D Why? ...

...It seems that NOW, when you launch a 'paravirtual' AMI, the root user's
'authorized_keys' file is always overwritten. This means the workaround
doesn't work anymore! I would LOVE for someone to verify this.
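
If anyone wants to check quickly, something like this should show it (the
key file and hostname are placeholders; -t is needed because the default
sudoers on these images requires a tty):

ssh -t -i ~/.ssh/<key>.pem ec2-user@<instance-public-dns> \
  'sudo cat /root/.ssh/authorized_keys'

If the workaround has been undone, the key line typically comes back
prefixed with a cloud-init command="echo 'Please login as the user ...'"
restriction instead of the bare public key.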

Just to point out, I am trying to make this work with a paravirtual
instance and not an HVM instance.

Please and thanks,
Marco.

Re: AWS Spark-ec2 script with different user

Posted by Shivaram Venkataraman <sh...@gmail.com>.
Right now the spark-ec2 scripts assume that you have root access, and a lot
of internal scripts assume the user's home directory is hard-coded as
/root. However, all the Spark AMIs we build should have root ssh access --
do you find this not to be the case?

You can also enable root ssh access in a vanilla AMI by editing
/etc/ssh/sshd_config and setting "PermitRootLogin" to yes
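
A minimal sketch of that change (assuming Amazon Linux, where the ssh
service is named sshd):

sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service sshd restart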

Thanks
Shivaram