Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/04/01 12:10:53 UTC

[jira] [Resolved] (SPARK-6600) Open ports in ec2/spark_ec2.py to allow HDFS NFS gateway

     [ https://issues.apache.org/jira/browse/SPARK-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-6600.
------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.0

Issue resolved by pull request 5257
[https://github.com/apache/spark/pull/5257]

> Open ports in ec2/spark_ec2.py to allow HDFS NFS gateway  
> ----------------------------------------------------------
>
>                 Key: SPARK-6600
>                 URL: https://issues.apache.org/jira/browse/SPARK-6600
>             Project: Spark
>          Issue Type: New Feature
>          Components: EC2
>            Reporter: Florian Verhein
>             Fix For: 1.4.0
>
>
> Use case: a user has set up the Hadoop HDFS NFS gateway service on their spark_ec2.py-launched cluster and wants to mount it on their local machine.
> This requires the following ports to be opened in the incoming rule set for MASTER, for both UDP and TCP: 111, 2049, 4242 (a sketch of the corresponding rules follows below the issue details).
> (I have tried this and it works.)
> Note that this issue *does not* cover the implementation of an HDFS NFS gateway module in the spark-ec2 project. See the linked issue.
> Reference:
> https://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
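For reference, here is a minimal sketch, using the boto library that spark_ec2.py is built on, of what authorizing these ports on the master security group could look like. The function name, region, security group name and the 0.0.0.0/0 source range are placeholders for illustration only; this is not the actual change merged in pull request 5257.

    import boto.ec2

    # Ports used by the Hadoop HDFS NFS gateway: portmapper (111),
    # nfsd (2049) and mountd (4242); each needs both TCP and UDP.
    NFS_GATEWAY_PORTS = [111, 2049, 4242]

    def authorize_nfs_gateway_ports(master_group, authorized_address='0.0.0.0/0'):
        """Add inbound TCP and UDP rules for the NFS gateway ports."""
        for port in NFS_GATEWAY_PORTS:
            for protocol in ('tcp', 'udp'):
                master_group.authorize(protocol, port, port, authorized_address)

    if __name__ == '__main__':
        # Placeholder region and security group name; spark_ec2.py derives
        # these from its command-line options and the cluster name.
        conn = boto.ec2.connect_to_region('us-east-1')
        master_group = conn.get_all_security_groups(['my-cluster-master'])[0]
        authorize_nfs_gateway_ports(master_group)

In practice you would restrict authorized_address to the address of the machine that will mount the NFS export rather than opening the ports to the world.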



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org