Posted to issues@spark.apache.org by "Vinay (JIRA)" <ji...@apache.org> on 2015/07/08 10:36:04 UTC

[jira] [Commented] (SPARK-7442) Spark 1.3.1 / Hadoop 2.6 prebuilt package has broken S3 filesystem access

    [ https://issues.apache.org/jira/browse/SPARK-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618222#comment-14618222 ] 

Vinay commented on SPARK-7442:
------------------------------

Tried and tested--

Steps to submit a Spark job when the jar file resides in S3:

Step 1:  Add these dependencies to the pom file (a sample snippet follows the list):
A. hadoop-common.jar (optional if already present on the classpath)
B. hadoop-aws.jar
C. aws-java-sdk.jar
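
A minimal pom.xml sketch of these dependencies (the version numbers are assumptions taken from the jar names used in Step 3; adjust them to your build):

<!-- S3 filesystem support for Hadoop and the AWS SDK it depends on (versions assumed) -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk</artifactId>
  <version>1.7.4</version>
</dependency>
<!-- optional if hadoop-common is already on the classpath -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.6.0</version>
</dependency>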

These steps have to be followed on both the master and the slaves.

Step 2: Export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in bash:
export AWS_ACCESS_KEY_ID=XXXX
export AWS_SECRET_ACCESS_KEY=YYYY

Note: These credentials can also be supplied through the Hadoop/AWS configuration for your environment; one way is sketched below.
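
For example, a sketch (not part of the original steps) of passing the same credentials to the s3n filesystem through the Hadoop configuration from the Scala spark-shell, reusing the XXXX/YYYY placeholders above:

// set s3n credentials on the SparkContext's Hadoop configuration
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "XXXX")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YYYY")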

Step 3: Add the jars below to the classpath in "spark-env.sh"; this has to be done on the slaves as well.

SPARK_CLASSPATH="../lib/hadoop-aws-2.6.0.jar"
SPARK_CLASSPATH="$SPARK_CLASSPATH:../lib/aws-java-sdk-1.7.4.jar" 
SPARK_CLASSPATH="$SPARK_CLASSPATH:..lib/guava-11.0.2.jar"
Note: These jars ship with the Hadoop distribution; otherwise they can be downloaded separately.
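
An alternative sketch, since SPARK_CLASSPATH is reported as deprecated in Spark 1.x: the same jars can be put on the driver and executor classpaths through spark-defaults.conf (the relative paths below simply mirror the ones above):

# spark-defaults.conf -- colon-separated jar paths (paths assumed, as above)
spark.driver.extraClassPath    ../lib/hadoop-aws-2.6.0.jar:../lib/aws-java-sdk-1.7.4.jar:../lib/guava-11.0.2.jar
spark.executor.extraClassPath  ../lib/hadoop-aws-2.6.0.jar:../lib/aws-java-sdk-1.7.4.jar:../lib/guava-11.0.2.jar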

Step 4: When submitting the Spark job, append "--deploy-mode cluster".

Sample command to submit a Spark job:
spark-submit --class com.x.y.z --master spark://master:7077 --deploy-mode cluster s3://bucket-name/xyz.jar <args>
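
As a quick sanity check before submitting (a sketch in the Scala spark-shell, reusing the bucket-name placeholder above), verify that the s3n filesystem is usable at all:

// should succeed once hadoop-aws and the AWS SDK are on the classpath and credentials are set
sc.textFile("s3n://bucket-name/some_file").count()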

> Spark 1.3.1 / Hadoop 2.6 prebuilt package has broken S3 filesystem access
> -------------------------------------------------------------------------
>
>                 Key: SPARK-7442
>                 URL: https://issues.apache.org/jira/browse/SPARK-7442
>             Project: Spark
>          Issue Type: Bug
>          Components: Build
>    Affects Versions: 1.3.1
>         Environment: OS X
>            Reporter: Nicholas Chammas
>
> # Download Spark 1.3.1 pre-built for Hadoop 2.6 from the [Spark downloads page|http://spark.apache.org/downloads.html].
> # Add {{localhost}} to your {{slaves}} file and {{start-all.sh}}
> # Fire up PySpark and try reading from S3 with something like this:
>     {code}sc.textFile('s3n://bucket/file_*').count(){code}
> # You will get an error like this:
>     {code}py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
> : java.io.IOException: No FileSystem for scheme: s3n{code}
> {{file:///...}} works. Spark 1.3.1 prebuilt for Hadoop 2.4 works. Spark 1.3.0 works.
> It's just the combination of Spark 1.3.1 prebuilt for Hadoop 2.6 accessing S3 that doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org