Posted to user@spark.apache.org by Eduardo Cusa <ed...@usmediaconsulting.com> on 2014/12/18 15:42:21 UTC

EC2 VPC script

Hi guys.

I ran the following command to launch a new cluster:

./spark-ec2 -k test -i test.pem -s 1  --vpc-id vpc-XXXXX --subnet-id
subnet-XXXXX launch  vpc_spark

The instances started OK, but the command never ends. It shows the following
output:


Setting up security groups...
Searching for existing cluster vpc_spark...
Spark AMI: ami-5bb18832
Launching instances...
Launched 1 slaves in us-east-1a, regid = r-e9d603c4
Launched master in us-east-1a, regid = r-89d104a4
Waiting for cluster to enter 'ssh-ready' state...............


Any ideas what happened?


regards
Eduardo

Re: EC2 VPC script

Posted by Vladimir Grigor <vl...@kiosked.com>.
I also ran into this issue. I have reported it as a bug
(https://issues.apache.org/jira/browse/SPARK-5242) and submitted a fix. You
can find a link to the fixed fork in the comments on the issue page. Please
vote on the issue; hopefully the pull request will be accepted faster that
way :)

Regards, Vladimir

On Mon, Dec 29, 2014 at 7:48 PM, Eduardo Cusa <
eduardo.cusa@usmediaconsulting.com> wrote:

> I'm running the master branch.
>
> I finally got it working by changing all occurrences of the
> "*public_dns_name*" property to "*private_ip_address*" in the
> spark_ec2.py script.
>
> My VPC instances always have a null value in the "*public_dns_name*"
> property.
>
> Now my script only works for VPC instances.
>
> Regards
> Eduardo

Re: EC2 VPC script

Posted by Eduardo Cusa <ed...@usmediaconsulting.com>.
I'm running the master branch.

I finally got it working by changing all occurrences of the
"*public_dns_name*" property to "*private_ip_address*" in the spark_ec2.py
script.

My VPC instances always have a null value in the "*public_dns_name*"
property.

Now my script only works for VPC instances.
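
For reference, the workaround described above can be sketched as a small
helper. The `Instance` stand-in and the `dns_or_ip` function below are
hypothetical, not the actual patch from SPARK-5242; only the two field names
(`public_dns_name`, `private_ip_address`) mirror the boto instance attributes
that spark_ec2.py reads. Instead of replacing every occurrence of the public
DNS name, this sketch falls back to the private IP only when the public DNS
name is empty, which would keep non-VPC clusters working as well:

```python
from collections import namedtuple

# Stand-in for a boto EC2 instance; only the two fields relevant to
# the address lookup in spark_ec2.py are modeled here.
Instance = namedtuple("Instance", ["public_dns_name", "private_ip_address"])

def dns_or_ip(instance):
    """Return the address to connect to over SSH.

    VPC-only instances report an empty public_dns_name, so fall back
    to private_ip_address in that case rather than unconditionally
    swapping one attribute for the other.
    """
    if instance.public_dns_name:
        return instance.public_dns_name
    return instance.private_ip_address

# Classic EC2 instance: the public DNS name is set and gets used.
classic = Instance("ec2-54-0-0-1.compute-1.amazonaws.com", "10.0.0.1")
# VPC instance without a public hostname: public DNS name is empty.
vpc = Instance("", "10.0.0.2")

print(dns_or_ip(classic))  # ec2-54-0-0-1.compute-1.amazonaws.com
print(dns_or_ip(vpc))      # 10.0.0.2
```

A conditional fallback like this avoids the downside noted above, where a
blanket replacement makes the script work only for VPC instances.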

Regards
Eduardo

On Sat, Dec 20, 2014 at 7:53 PM, Nicholas Chammas <
nicholas.chammas@gmail.com> wrote:

> What version of the script are you running? What did you see in the EC2
> web console when this happened?
>
> Sometimes instances just don't come up in a reasonable amount of time and
> you have to kill and restart the process.
>
> Does this always happen, or was it just once?
>
> Nick

Re: EC2 VPC script

Posted by Nicholas Chammas <ni...@gmail.com>.
What version of the script are you running? What did you see in the EC2 web
console when this happened?

Sometimes instances just don't come up in a reasonable amount of time and
you have to kill and restart the process.

Does this always happen, or was it just once?

Nick

On Thu, Dec 18, 2014 at 9:42 AM, Eduardo Cusa <
eduardo.cusa@usmediaconsulting.com> wrote:

> Hi guys.
>
> I ran the following command to launch a new cluster:
>
> ./spark-ec2 -k test -i test.pem -s 1  --vpc-id vpc-XXXXX --subnet-id
> subnet-XXXXX launch  vpc_spark
>
> The instances started OK, but the command never ends. It shows the following
> output:
>
>
> Setting up security groups...
> Searching for existing cluster vpc_spark...
> Spark AMI: ami-5bb18832
> Launching instances...
> Launched 1 slaves in us-east-1a, regid = r-e9d603c4
> Launched master in us-east-1a, regid = r-89d104a4
> Waiting for cluster to enter 'ssh-ready' state...............
>
>
> Any ideas what happened?
>
>
> regards
> Eduardo
>