Posted to issues@flink.apache.org by "Tank (Jira)" <ji...@apache.org> on 2019/12/12 07:21:00 UTC

[jira] [Updated] (FLINK-15215) Not able to provide a custom AWS credentials provider with flink-s3-fs-hadoop

     [ https://issues.apache.org/jira/browse/FLINK-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tank updated FLINK-15215:
-------------------------
    Description: 
I am using Flink 1.9.0 on EMR (EMR release 5.28), with a StreamingFileSink writing to an S3 filesystem.

I want Flink to write to an S3 bucket owned by another AWS account, and I want to do that by assuming a role in that account. This can be accomplished by providing a custom credential provider (similar to [https://aws.amazon.com/blogs/big-data/securely-analyze-data-from-another-aws-account-with-emrfs/]); see the sketch below.
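For context, a provider along these lines is what I am trying to plug in. This is a minimal sketch only: the package, class name, configuration key, and role ARN are hypothetical placeholders, and the role assumption follows the pattern from the EMRFS post linked above.

{code:java}
package com.example.auth; // hypothetical package

import java.net.URI;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import org.apache.hadoop.conf.Configuration;

/**
 * Assumes a role in the bucket-owning account and hands the temporary
 * credentials to S3A. Role ARN and config key below are placeholders.
 */
public class CrossAccountCredentialsProvider implements AWSCredentialsProvider {

    private final STSAssumeRoleSessionCredentialsProvider delegate;

    // Hadoop's S3A filesystem can instantiate a provider through a
    // (URI, Configuration) constructor if one exists.
    public CrossAccountCredentialsProvider(URI uri, Configuration conf) {
        String roleArn = conf.get(
                "fs.s3a.example.assumed.role.arn",                      // hypothetical key
                "arn:aws:iam::123456789012:role/cross-account-writer"); // placeholder ARN
        this.delegate = new STSAssumeRoleSessionCredentialsProvider
                .Builder(roleArn, "flink-s3-session")
                .build();
    }

    @Override
    public AWSCredentials getCredentials() {
        return delegate.getCredentials();
    }

    @Override
    public void refresh() {
        delegate.refresh();
    }
}
{code}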

 

As described in https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/#pluggable-file-systems, I copied {{flink-s3-fs-hadoop-1.9.0.jar}} to the plugins directory. But the configuration parameter 'fs.s3a.aws.credentials.provider' gets shaded ([https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-hadoop/src/main/java/org/apache/flink/fs/s3hadoop/S3FileSystemFactory.java#L47]), and so do all the AWS SDK dependencies, so when I provided a custom credential provider, it complained that my class did not implement the correct interface (AWSCredentialsProvider).
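To make the mismatch concrete, here is a self-contained toy program. All names are illustrative: the two nested interfaces stand in for the unshaded com.amazonaws.auth.AWSCredentialsProvider on my classpath and the relocated copy bundled inside the shaded plugin jar.

{code:java}
public class ShadingMismatchDemo {

    interface UnshadedProvider {} // what my code compiles against

    interface ShadedProvider {}   // what the shaded plugin checks against

    // My custom provider implements the unshaded interface...
    static class CustomProvider implements UnshadedProvider {}

    public static void main(String[] args) {
        Class<?> providerClass = CustomProvider.class;
        // ...so the plugin's assignability check against the relocated
        // interface fails, even though the class "looks" correct:
        System.out.println(ShadedProvider.class.isAssignableFrom(providerClass)); // false
    }
}
{code}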

The fix made in https://issues.apache.org/jira/browse/FLINK-13044 allows users to use one of the built-in credential providers such as {{InstanceProfileCredentialsProvider}}, but it still does not help with providing custom credential providers.
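Concretely, the situation in flink-conf.yaml looks roughly like this (the custom class name is the hypothetical provider sketched above):

{code}
# Works after FLINK-13044: a built-in provider name the shaded plugin knows
# how to handle.
fs.s3a.aws.credentials.provider: com.amazonaws.auth.InstanceProfileCredentialsProvider

# Still fails: a custom class, compiled against the unshaded AWS SDK, does
# not implement the relocated interface inside the plugin jar.
fs.s3a.aws.credentials.provider: com.example.auth.CrossAccountCredentialsProvider
{code}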

 

Related: https://issues.apache.org/jira/browse/FLINK-13602



> Not able to provide a custom AWS credentials provider with flink-s3-fs-hadoop
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-15215
>                 URL: https://issues.apache.org/jira/browse/FLINK-15215
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystems
>    Affects Versions: 1.9.0
>            Reporter: Tank
>            Priority: Major
>


