Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/07/24 13:51:00 UTC

[jira] [Commented] (FLINK-8439) Document using a custom AWS Credentials Provider with flink-s3-fs-hadoop

    [ https://issues.apache.org/jira/browse/FLINK-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554261#comment-16554261 ] 

ASF GitHub Bot commented on FLINK-8439:
---------------------------------------

Github user azagrebin commented on the issue:

    https://github.com/apache/flink/pull/6405
  
    cc @GJL @StephanEwen


> Document using a custom AWS Credentials Provider with flink-s3-fs-hadoop
> ------------------------------------------------------------------------
>
>                 Key: FLINK-8439
>                 URL: https://issues.apache.org/jira/browse/FLINK-8439
>             Project: Flink
>          Issue Type: Improvement
>          Components: Documentation
>            Reporter: Dyana Rose
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.4.3, 1.5.3
>
>
> This came up when using S3 for the file system backend while running under ECS.
> With no credentials in the container, hadoop-aws will default to EC2 instance-level credentials when accessing S3. However, when running under ECS, you will generally want to default to the task definition's IAM role.
> In this case you need to set the Hadoop property
> {code:java}
> fs.s3a.aws.credentials.provider{code}
> to one or more fully qualified class names; see the [hadoop-aws docs|https://github.com/apache/hadoop/blob/1ba491ff907fc5d2618add980734a3534e2be098/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md].
> This works as expected when you add this setting to flink-conf.yaml, but there is a further 'gotcha.' Because the AWS SDK is shaded, the actual fully qualified class name for, in this case, the ContainerCredentialsProvider is
> {code:java}
> org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
> meaning the full setting is:
> {code:java}
> fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider{code}
> If you instead set it to the unshaded class name, you will see a very confusing error stating that the ContainerCredentialsProvider does not implement AWSCredentialsProvider (which it most certainly does).
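> The property also accepts a comma-separated list of providers, which are tried in order. As a rough sketch (assuming the same shading prefix as above, which may differ between Flink versions), a chain that prefers the ECS task role and falls back to EC2 instance credentials would look like:
> {code:java}
> fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.ContainerCredentialsProvider,org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.InstanceProfileCredentialsProvider{code}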
> Adding this information (how to specify alternate credential providers, and the namespace gotcha) to the [AWS deployment docs|https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html] would be useful to anyone else using S3.
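> The same namespace point applies when writing your own provider: it has to implement the shaded AWSCredentialsProvider interface. A minimal sketch (the package, class name, and environment variables below are placeholders, and the shading prefix should be checked against the flink-s3-fs-hadoop jar actually deployed) could look like:
> {code:java}
> package com.example.auth; // hypothetical package, for illustration only
>
> import org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.AWSCredentials;
> import org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.AWSCredentialsProvider;
> import org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.auth.BasicAWSCredentials;
>
> /** Minimal sketch of a credentials provider compiled against the shaded AWS SDK. */
> public class EnvVarCredentialsProvider implements AWSCredentialsProvider {
>
>     @Override
>     public AWSCredentials getCredentials() {
>         // Illustrative only: read static credentials from custom environment variables.
>         return new BasicAWSCredentials(
>                 System.getenv("MY_S3_ACCESS_KEY"),
>                 System.getenv("MY_S3_SECRET_KEY"));
>     }
>
>     @Override
>     public void refresh() {
>         // Nothing to refresh for static credentials.
>     }
> }
> {code}
> Note that your own class keeps its own (unshaded) package name in flink-conf.yaml; only the SDK classes it references live in the shaded namespace:
> {code:java}
> fs.s3a.aws.credentials.provider: com.example.auth.EnvVarCredentialsProvider{code}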



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)