Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/02/27 08:15:45 UTC

[jira] [Assigned] (SPARK-19739) SparkHadoopUtil.appendS3AndSparkHadoopConfigurations to propagate full set of AWS env vars

     [ https://issues.apache.org/jira/browse/SPARK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-19739:
------------------------------------

    Assignee:     (was: Apache Spark)

> SparkHadoopUtil.appendS3AndSparkHadoopConfigurations to propagate full set of AWS env vars
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19739
>                 URL: https://issues.apache.org/jira/browse/SPARK-19739
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> {{SparkHadoopUtil.appendS3AndSparkHadoopConfigurations()}} propagates the AWS access key and secret key env vars, if set, to the s3n and s3a config options, getting secrets from the user's environment to the cluster.
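> A minimal sketch of what that propagation amounts to (illustrative only, not the exact Spark source; {{hadoopConf}} is the target Hadoop {{Configuration}}):
> {code}
> // Copy the AWS credential env vars into the Hadoop configuration so
> // that s3n:// and s3a:// URLs can authenticate on the cluster.
> val keyId = System.getenv("AWS_ACCESS_KEY_ID")
> val accessKey = System.getenv("AWS_SECRET_ACCESS_KEY")
> if (keyId != null && accessKey != null) {
>   hadoopConf.set("fs.s3n.awsAccessKeyId", keyId)
>   hadoopConf.set("fs.s3a.access.key", keyId)
>   hadoopConf.set("fs.s3n.awsSecretAccessKey", accessKey)
>   hadoopConf.set("fs.s3a.secret.key", accessKey)
> }
> {code}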
> AWS also supports session authentication (env var {{AWS_SESSION_TOKEN}}) and region endpoints (env var {{AWS_DEFAULT_REGION}}), the latter being critical if you want to address V4-auth-only endpoints such as Frankfurt and Seoul.
> These env vars should be picked up and passed down to s3a too. It's only 4+ lines of code, though impossible to test unless the existing code is refactored to take the env vars as a {{Map[String, String]}}, allowing a test suite to set the values in its own map.
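> A hedged sketch of that refactoring; the method name {{appendS3CredentialsFromEnvironment}} and the region-to-endpoint mapping are assumptions, while {{fs.s3a.session.token}} and {{fs.s3a.endpoint}} are the standard s3a keys:
> {code}
> import org.apache.hadoop.conf.Configuration
>
> // Hypothetical refactoring: take the environment as an explicit Map so
> // a test suite can inject its own values instead of the process env.
> private[spark] def appendS3CredentialsFromEnvironment(
>     hadoopConf: Configuration,
>     env: Map[String, String]): Unit = {
>   for (keyId <- env.get("AWS_ACCESS_KEY_ID");
>        accessKey <- env.get("AWS_SECRET_ACCESS_KEY")) {
>     hadoopConf.set("fs.s3n.awsAccessKeyId", keyId)
>     hadoopConf.set("fs.s3a.access.key", keyId)
>     hadoopConf.set("fs.s3n.awsSecretAccessKey", accessKey)
>     hadoopConf.set("fs.s3a.secret.key", accessKey)
>     // New: session token for temporary/assumed-role credentials.
>     env.get("AWS_SESSION_TOKEN").foreach { token =>
>       hadoopConf.set("fs.s3a.session.token", token)
>     }
>     // New: region. s3a wants an endpoint hostname, so the region name
>     // needs mapping, e.g. "eu-central-1" to "s3.eu-central-1.amazonaws.com".
>     env.get("AWS_DEFAULT_REGION").foreach { region =>
>       hadoopConf.set("fs.s3a.endpoint", s"s3.$region.amazonaws.com")
>     }
>   }
> }
> {code}
> Production code would call this with {{sys.env}}; a test passes its own map, e.g. {{Map("AWS_ACCESS_KEY_ID" -> "id", "AWS_SECRET_ACCESS_KEY" -> "key")}}, and asserts on the resulting configuration.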
> Side issue: what if only some of the env vars are set and users are trying to understand why auth is failing? It may be good to build up a string identifying which env vars had their values propagated, and log that at debug level, while not logging the values themselves, obviously.
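> Something along these lines (sketch; assumes the {{Map}}-based {{env}} from above and the usual {{logDebug}} from Spark's {{Logging}} trait):
> {code}
> // Record which env vars were propagated: names only, never values.
> val propagated = Seq("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
>     "AWS_SESSION_TOKEN", "AWS_DEFAULT_REGION").filter(env.contains)
> logDebug(s"AWS env vars propagated: ${propagated.mkString(", ")}")
> {code}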



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org