Posted to issues@spark.apache.org by "Sankar Mittapally (JIRA)" <ji...@apache.org> on 2016/10/12 10:48:20 UTC

[jira] [Created] (SPARK-17889) How to update sc.hadoopConfiguration.set properties in standalone spark cluster for SparkR

Sankar Mittapally created SPARK-17889:
-----------------------------------------

             Summary: How to update sc.hadoopConfiguration.set properties in standalone spark cluster for SparkR
                 Key: SPARK-17889
                 URL: https://issues.apache.org/jira/browse/SPARK-17889
             Project: Spark
          Issue Type: Request
          Components: SparkR
    Affects Versions: 2.0.0
            Reporter: Sankar Mittapally


Hello,

 I have downloaded the following jars:
aws-java-sdk-1.7.4.jar 
hadoop-aws-2.7.1.jar 

for Spark 2.0.0 and Hadoop 2.7.1.

I want to access an S3 bucket from SparkR, but I am not sure how to set these properties:


sc.hadoopConfiguration.set("fs.s3n.impl","org.apache.hadoop.fs.s3native.NativeS3FileSystem")
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", accessKey)
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", secretKey)

Please let me know which file I need to update these in, or whether I need to pass them at runtime.
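
For example, from SparkR I would expect to pass them when the session is created, using the "spark.hadoop." prefix so the values end up in the Hadoop configuration. A rough sketch of what I have in mind is below (the app name, bucket path and credentials are placeholders), but I am not sure this is the recommended approach:

library(SparkR)

# Start (or reuse) a SparkR session, forwarding the Hadoop settings via the
# "spark.hadoop." prefix so they are copied into the Hadoop configuration.
sparkR.session(
  appName = "s3-from-sparkr",                                        # placeholder
  sparkConfig = list(
    "spark.hadoop.fs.s3n.impl" = "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
    "spark.hadoop.fs.s3n.awsAccessKeyId" = "YOUR_ACCESS_KEY",        # placeholder
    "spark.hadoop.fs.s3n.awsSecretAccessKey" = "YOUR_SECRET_KEY"     # placeholder
  )
)

# Read a file from the bucket to check that the settings took effect.
df <- read.df("s3n://your-bucket/some/path/data.csv", source = "csv", header = "true")
head(df)

Alternatively, would these go into spark-defaults.conf (again with the spark.hadoop. prefix) on the standalone cluster?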

Thanks in advance,
Sankar



