Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/07/09 19:49:00 UTC

[jira] [Commented] (HADOOP-15569) Expand S3A Assumed Role docs

    [ https://issues.apache.org/jira/browse/HADOOP-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537464#comment-16537464 ] 

Steve Loughran commented on HADOOP-15569:
-----------------------------------------

You need to be able to call getBucketLocation() to bootstrap S3Guard: update the docs to cover this. As the stack trace shows, DynamoDBMetadataStore.initialize() invokes it during filesystem initialization, so a restricted role without the s3:GetBucketLocation permission fails with a 403 AccessDenied:
{code}

[ERROR] testAssumeRoleRestrictedPolicyFS(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole)  Time elapsed: 2.448 s  <<< ERROR!
java.nio.file.AccessDeniedException: hwdev-steve-ireland-new: getBucketLocation() on hwdev-steve-ireland-new: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 8EE9E27CFB336A85; S3 Extended Request ID: s6rL7a9k3DjhDtjhV8Lza0hKToYGJOhwh0pCNWVZocW7ElBzeZ/aOh3m6O5aMAj4HO6QH89KGpg=), S3 Extended Request ID: s6rL7a9k3DjhDtjhV8Lza0hKToYGJOhwh0pCNWVZocW7ElBzeZ/aOh3m6O5aMAj4HO6QH89KGpg=:AccessDenied
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:226)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
	at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:231)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:530)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:518)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:294)
	at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:341)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
	at org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testAssumeRoleRestrictedPolicyFS(ITestAssumeRole.java:311)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 8EE9E27CFB336A85; S3 Extended Request ID: s6rL7a9k3DjhDtjhV8Lza0hKToYGJOhwh0pCNWVZocW7ElBzeZ/aOh3m6O5aMAj4HO6QH89KGpg=), S3 Extended Request ID: s6rL7a9k3DjhDtjhV8Lza0hKToYGJOhwh0pCNWVZocW7ElBzeZ/aOh3m6O5aMAj4HO6QH89KGpg=
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4266)
	at com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:949)
	at com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:955)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getBucketLocation$3(S3AFileSystem.java:531)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
	... 25 more
{code}
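
For the docs, it may be worth showing how to check this up front, before S3Guard init gets involved. A minimal probe, as a sketch only (class name, bucket name and region are placeholders; assumes the AWS SDK v1 classes already on Hadoop's classpath):

{code}
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class GetBucketLocationProbe {
  public static void main(String[] args) {
    // Uses the default credential provider chain; point it at the
    // assumed-role credentials under test. Region is a placeholder.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion("eu-west-1")
        .build();
    try {
      // Needs the s3:GetBucketLocation permission on the bucket.
      String region = s3.getBucketLocation("example-bucket");
      System.out.println("Bucket region: " + region);
    } catch (AmazonServiceException e) {
      // A 403/AccessDenied here reproduces the failure above: S3Guard
      // bootstrap will fail the same way in S3AFileSystem.initialize().
      System.err.println("getBucketLocation denied: " + e.getErrorCode());
    }
  }
}
{code}

In policy terms this means granting s3:GetBucketLocation on the bucket resource itself; it is a bucket-level action, not an object-level one.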


> Expand S3A Assumed Role docs
> ----------------------------
>
>                 Key: HADOOP-15569
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15569
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: documentation, fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-15569-001.patch
>
>
> The S3A assumed role doc is now where we document the permissions needed to work with buckets:
> # detail the permissions you need for S3Guard user and admin ops
> # and what you need for SSE-KMS
> This involves me working them out, so I'll presumably get some new stack traces.
> Also: fix any errors noted in the doc.


