Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2016/06/03 20:46:59 UTC

[jira] [Commented] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials

    [ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15314772#comment-15314772 ] 

Steve Loughran commented on HADOOP-13237:
-----------------------------------------

Stack trace:
{code}
16/06/03 21:40:37 INFO BlockManagerMasterEndpoint: Registering block manager localhost:60011 with 511.1 MB RAM, BlockManagerId(driver, localhost, 60011)
16/06/03 21:40:37 INFO BlockManagerMaster: Registered BlockManager
16/06/03 21:40:39 ERROR S3ALineCount: Failed to execute line count
org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on landsat-pds: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain: Unable to load AWS credentials from any provider in the chain
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:82)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:300)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:267)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
	at org.apache.spark.cloud.s3.examples.S3ALineCount$.innerMain(S3ALineCount.scala:75)
	at org.apache.spark.cloud.s3.examples.S3ALineCount$.main(S3ALineCount.scala:50)
	at org.apache.spark.cloud.s3.examples.S3ALineCount.main(S3ALineCount.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
	at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3779)
	at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1107)
	at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1070)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:288)
	... 18 more
{code}
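
A minimal repro sketch outside Spark, straight against the Hadoop FileSystem API (class name and object path below are illustrative, not from the report). Note that the failure happens inside {{initialize()}}, before any read is even attempted:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class S3APublicBucketRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // No fs.s3a.access.key / fs.s3a.secret.key, no AWS_* env vars, no
    // instance profile: the SDK's default credential chain has nothing.
    FileSystem fs = FileSystem.get(new URI("s3a://landsat-pds/"), conf);
    // Never reached: S3AFileSystem.initialize() calls verifyBucketExists(),
    // whose signed HEAD-bucket request dies with
    // "Unable to load AWS credentials from any provider in the chain".
    System.out.println(fs.getFileStatus(new Path("/scene_list.gz")));
  }
}
{code}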

> s3a initialization against public bucket fails if caller lacks any credentials
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-13237
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13237
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>
> If an S3 bucket is public, anyone should be able to read from it.
> However, you cannot create an s3a client bound to a public bucket unless you have some credentials: the {{doesBucketExist()}} probe in {{initialize()}} issues a signed HEAD-bucket request, so the AWS SDK's credential provider chain throws before the request ever reaches S3.
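> A hedged workaround sketch (assuming Hadoop 2.8.0+, where {{fs.s3a.aws.credentials.provider}} is read during {{initialize()}}): point the chain at {{AnonymousAWSCredentialsProvider}} so the probe goes out unsigned, which a public bucket accepts.
> {code}
> // Sketch only: assumes org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider
> // ships in this hadoop-aws version (same imports as the repro sketch above).
> Configuration conf = new Configuration();
> conf.set("fs.s3a.aws.credentials.provider",
>     "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider");
> // With anonymous credentials the HEAD-bucket call is unsigned, so
> // initialize() no longer needs the default provider chain to resolve.
> FileSystem fs = FileSystem.get(new URI("s3a://landsat-pds/"), conf);
> {code}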



