Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2019/10/03 09:59:01 UTC

[jira] [Commented] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

    [ https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943464#comment-16943464 ] 

Steve Loughran commented on HADOOP-16626:
-----------------------------------------

The test filesystem created here has list access but not HEAD/GET access.

Looking at the stack, I don't see how the raw check could work at all here,
because we call getFileStatus before the LIST. With S3Guard, fine,
provided the entry is in the table. But raw, it should always fail.

So why don't I see that? Is it because I'm clearing the bucket settings?
I will look with a debugger.
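
For anyone following along, here's a minimal sketch of why the raw case should always fail; this is not the test itself, and the bucket and path below are placeholders. With credentials that allow listing but not HEAD/GET, the getFileStatus probe inside listStatus issues the HEAD and hits the 403 before any LIST happens:

{code}
// Illustrative only: bucket/path are placeholders, not the test paths.
import java.nio.file.AccessDeniedException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RawListProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Force the "raw" path: no S3Guard metadata store.
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore");
    Path dir = new Path("s3a://example-bucket/test/noReadDir/emptyDir/");
    try (FileSystem fs = FileSystem.get(dir.toUri(), conf)) {
      fs.listStatus(dir);   // getFileStatus HEAD probe runs first -> 403
      System.out.println("unexpected: listing succeeded");
    } catch (AccessDeniedException expected) {
      // translateException() maps the S3 403 to AccessDeniedException
      System.out.println("403 on the getFileStatus probe, as predicted");
    }
  }
}
{code}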

FWIW, I do hope/plan to actually remove those getFileStatus calls
before list operations which are normally called against directories
(the list* operations, essentially). They should do the LIST first,
and only if that fails to find anything, fall back to the getFileStatus
probes for a file or marker. This should make a big difference during query planning, and stop markers being mistaken for empty directories.

This means whatever changes I make to fix this regression will have to be rolled back later. Never mind.

Thanks for finding this. 

> S3A ITestRestrictedReadAccess fails
> -----------------------------------
>
>                 Key: HADOOP-16626
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16626
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Siddharth Seth
>            Assignee: Steve Loughran
>            Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used:
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~stevel@apache.org]
> {code}
> -------------------------------------------------------------------------------
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> -------------------------------------------------------------------------------
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)  Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on test/testNoReadAccess-raw/noReadDir/emptyDir/: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403 Forbidden
>         at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
>         at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
>         at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
>         at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>         at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>         at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>         at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>         at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
>         at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
>         at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
>         at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
>         at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
>         at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
>         at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1320)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$5(S3AFileSystem.java:1682)
>         at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
>         at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:370)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1675)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1651)
>         at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2758)
>         ... 23 more
> {code}


