Posted to issues@spark.apache.org by "Vladimir Grigor (JIRA)" <ji...@apache.org> on 2015/01/14 07:48:34 UTC

[jira] [Created] (SPARK-5242) "ec2/spark_ec2.py launch" does not work with VPC if no public DNS or IP is available

Vladimir Grigor created SPARK-5242:
--------------------------------------

             Summary: "ec2/spark_ec2.py launch" does not work with VPC if no public DNS or IP is available
                 Key: SPARK-5242
                 URL: https://issues.apache.org/jira/browse/SPARK-5242
             Project: Spark
          Issue Type: Bug
          Components: EC2
            Reporter: Vladimir Grigor


How to reproduce: a user launching a cluster in a VPC waits forever:
{code}
./spark-ec2 -k key20141114 -i ~/aws/key.pem -s 1 --region=eu-west-1 --spark-version=1.2.0 --instance-type=m1.large --vpc-id=vpc-2e71dd46 --subnet-id=subnet-2571dd4d --zone=eu-west-1a  launch SparkByScript
Setting up security groups...
Searching for existing cluster SparkByScript...
Spark AMI: ami-1ae0166d
Launching instances...
Launched 1 slaves in eu-west-1a, regid = r-e70c5502
Launched master in eu-west-1a, regid = r-bf0f565a
Waiting for cluster to enter 'ssh-ready' state..........{forever}
{code}

The problem is that the current code wrongly assumes that a VPC instance has a public_dns_name or a public ip_address. In practice it is more common for a VPC instance to have only a private_ip_address.
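For illustration only, here is a minimal sketch of the kind of fallback the 'ssh-ready' wait loop could use; the helper name get_dns_name and its private_ips flag are assumptions for this example, not necessarily the code in my fork:
{code}
def get_dns_name(instance, private_ips=False):
    # Prefer the public DNS name, then the public IP; for VPC
    # instances that expose neither, fall back to the private IP
    # (reachable when the script runs from inside the VPC).
    dns = instance.public_dns_name
    if private_ips or not dns:
        dns = instance.ip_address or instance.private_ip_address
    if not dns:
        raise Exception("Failed to determine address of {i}".format(i=instance.id))
    return dns
{code}
With a fallback like this, the polling loop gets an address it can actually SSH to instead of waiting on a public DNS name that will never appear.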


The bug is already fixed in my fork; I am going to submit a pull request.



