Posted to issues@spark.apache.org by "Brad Willard (JIRA)" <ji...@apache.org> on 2014/12/30 18:03:13 UTC

[jira] [Created] (SPARK-5008) Persistent HDFS does not recognize EBS Volumes

Brad Willard created SPARK-5008:
-----------------------------------

             Summary: Persistent HDFS does not recognize EBS Volumes
                 Key: SPARK-5008
                 URL: https://issues.apache.org/jira/browse/SPARK-5008
             Project: Spark
          Issue Type: Bug
    Affects Versions: 1.2.0
         Environment: 8 Node Cluster Generated from 1.2.0 spark-ec2 script.
-m c3.2xlarge -t c3.8xlarge --ebs-vol-size 300 --ebs-vol-type gp2 --ebs-vol-num 1
            Reporter: Brad Willard


The cluster is built with the correct-size EBS volumes. The volume is created at /dev/xvds and mounted to /vol0. However, when you start persistent HDFS with the start-all script, it starts but is not correctly configured to use the EBS volume.
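For reference, a minimal sketch of what a correctly configured persistent HDFS would look like, assuming the data directory setting simply needs to point at the EBS mount reported above (/vol0); the exact property layout used by the spark-ec2 templates is an assumption here, not confirmed by this report:

```xml
<!-- hdfs-site.xml (sketch): point the DataNode storage at the EBS mount.
     /vol0 is the mount point from this report; the rest is a generic
     HDFS configuration fragment, not the verified spark-ec2 template. -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/vol0/persistent-hdfs/data</value>
  </property>
</configuration>
```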

I'm assuming some symlinks or expected mounts are not configured correctly.

This has worked flawlessly on all previous versions of Spark.

I have a crude workaround: installing pssh and remounting the volume to /vol, which worked; however, it does not survive restarts.
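The workaround above can be sketched roughly as follows. The slave-list path, device name, and mount point are assumptions taken from this report (not verified spark-ec2 layout), and the pssh command is echoed as a dry run rather than executed:

```shell
#!/bin/sh
# Sketch of the pssh remount workaround described above.
HOSTS=/root/spark-ec2/slaves   # assumed slave host list location
DEV=/dev/xvds                  # EBS device name from this report
MNT=/vol                       # workaround mount point from this report

# Dry run: print the command that would remount the EBS volume on every
# slave. Remove the leading `echo` to actually run it on a live cluster.
echo pssh -h "$HOSTS" "mkdir -p $MNT && mount $DEV $MNT"
```

Because this remount is done by hand rather than via fstab or the spark-ec2 setup scripts, it is consistent with the report that the fix does not survive restarts.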



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
