Posted to issues@flink.apache.org by "Aljoscha Krettek (JIRA)" <ji...@apache.org> on 2018/07/27 11:54:00 UTC

[jira] [Closed] (FLINK-8439) Add Flink shading to AWS credential provider s3 hadoop config

     [ https://issues.apache.org/jira/browse/FLINK-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aljoscha Krettek closed FLINK-8439.
-----------------------------------
       Resolution: Fixed
    Fix Version/s:     (was: 1.5.3)
                       (was: 1.4.3)
                   1.6.0

Implemented on master in
7be07871c23b56547add4cd85e15b95c757f882b

Implemented on release-1.6 in
76bc0e96f58f450a2b96c240290fddb269931f06

> Add Flink shading to AWS credential provider s3 hadoop config
> -------------------------------------------------------------
>
>                 Key: FLINK-8439
>                 URL: https://issues.apache.org/jira/browse/FLINK-8439
>             Project: Flink
>          Issue Type: Improvement
>          Components: Documentation
>            Reporter: Dyana Rose
>            Assignee: Andrey Zagrebin
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.6.0
>
>
> This came up when using S3 for the file system backend and running under ECS.
> With no credentials in the container, hadoop-aws will default to EC2 instance-level credentials when accessing S3. However, when running under ECS, you will generally want to default to the task definition's IAM role.
> In this case you need to set the Hadoop property
> {code:java}
> fs.s3a.aws.credentials.provider{code}
> to one or more fully qualified class names; see the [hadoop-aws docs|https://github.com/apache/hadoop/blob/1ba491ff907fc5d2618add980734a3534e2be098/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md].
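> For illustration, a sketch of what the hadoop-aws docs would suggest in flink-conf.yaml, using the plain (unshaded) AWS SDK class names and a comma-separated provider list; note that the shading gotcha described below changes these names for Flink:
> {code:java}
> fs.s3a.aws.credentials.provider: com.amazonaws.auth.ContainerCredentialsProvider,com.amazonaws.auth.InstanceProfileCredentialsProvider{code}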
> This works as expected when you add this setting to flink-conf.yaml, but there is a further gotcha: because the AWS SDK is shaded, the actual fully qualified class name for, in this case, the ContainerCredentialsProvider is
> {code:java}
> org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
>  
> meaning the full setting is:
> {code:java}
> fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
> If you instead set it to the unshaded class name, you will see a very confusing error stating that ContainerCredentialsProvider doesn't implement AWSCredentialsProvider (which it most certainly does).
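> A minimal illustrative sketch (not the actual Flink or Hadoop source) of why that error appears: after shading, the AWSCredentialsProvider interface the S3 filesystem loads is a different class from the unshaded one, so an assignability check against a provider that only implements the unshaded interface fails even though the simple names match.
> {code:java}
> // Illustrative only: two interfaces with the same simple name but different
> // packages are unrelated types, so isAssignableFrom() returns false.
> public class ShadingMismatchDemo {
>     interface AWSCredentialsProvider {}        // stand-in for the unshaded com.amazonaws.auth interface
>     interface ShadedAWSCredentialsProvider {}  // stand-in for the shaded copy Flink actually loads
> 
>     // A provider compiled against the unshaded interface only.
>     static class ContainerCredentialsProvider implements AWSCredentialsProvider {}
> 
>     public static void main(String[] args) {
>         Class<?> configured = ContainerCredentialsProvider.class;
>         // The filesystem checks against its (shaded) interface, so the unshaded provider is rejected.
>         System.out.println(ShadedAWSCredentialsProvider.class.isAssignableFrom(configured)); // false
>         System.out.println(AWSCredentialsProvider.class.isAssignableFrom(configured));       // true
>     }
> }{code}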
> Adding this information (how to specify alternate credential providers, and the namespace gotcha) to the [AWS deployment docs|https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html] would be useful to anyone else using S3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)