Posted to user@spark.apache.org by Nick Pentreath <ni...@gmail.com> on 2014/05/16 21:53:05 UTC

spark-submit / S3

Hi

I see from the 1.0.0 docs that the new "spark-submit" mechanism supports
specifying the application jar with an hdfs:// or http:// URL.

Does this also support S3? It doesn't appear to; I tried the following on EC2
and it doesn't work:

./bin/spark-submit --master local[2] --class myclass s3n://bucket/myapp.jar
<args>
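
In the meantime I can work around it by copying the jar down to local disk
first and then submitting the local path, which spark-submit definitely
accepts. A rough sketch, assuming the AWS CLI is installed and configured on
the node and that the bucket/jar names are just placeholders:

# copy the application jar from S3 to local disk first
aws s3 cp s3://bucket/myapp.jar /tmp/myapp.jar

# then submit using the local path
./bin/spark-submit --master local[2] --class myclass /tmp/myapp.jar <args>

(It may also simply be that the s3n:// path needs the fs.s3n.awsAccessKeyId /
fs.s3n.awsSecretAccessKey properties set in the Hadoop configuration, but I
haven't confirmed whether that is the issue here.)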