Posted to common-user@hadoop.apache.org by Alexander Hristov <al...@planetalia.com> on 2012/09/29 17:37:01 UTC
Hadoop 0.23.3 and Amazon S3
Hi again,
I'm having problems trying to make Hadoop use S3 or S3N as its filesystem.
This is what I have in core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>s3n://bucketname</value>
</property>
<property>
<name>fs.s3.awsAccessKeyId</name>
<value> something </value>
</property>
<property>
<name>fs.s3.awsSecretAccessKey</name>
<value> something </value>
</property>
<property>
<name>fs.s3n.awsAccessKeyId</name>
<value> something </value>
</property>
<property>
<name>fs.s3n.awsSecretAccessKey</name>
<value> something </value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop</value>
</property>
</configuration>
The secret key does not contain any slashes.
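(I mention the slashes because, as far as I know, the keys can also be embedded directly in the URI, and a slash in the secret key breaks URI parsing. For reference, that alternative form would look like the following, with placeholder values:)

```xml
<property>
  <name>fs.default.name</name>
  <!-- ACCESS_KEY and SECRET_KEY are placeholders, not real credentials -->
  <value>s3n://ACCESS_KEY:SECRET_KEY@bucketname</value>
</property>
```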
When I use s3n://bucketname, I get this:
[hadoop@ahristov hadoop]$ hadoop fs -put LICENSE.txt /
put: org.jets3t.service.S3ServiceException: S3 HEAD request failed for
'/LICENSE.txt' - ResponseCode=403, ResponseMessage=Forbidden
And when I use s3://bucketname, I get this:
[hadoop@ahristov hadoop]$ hadoop fs -put LICENSE.txt /
put: `/': No such file or directory
I couldn't find any logs generated anywhere.
On the other hand, if I use a quick and dirty Java snippet to achieve
the same, like:
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Load the same S3 configuration from the classpath
Configuration conf = new Configuration();
conf.addResource(TestS3.class.getResourceAsStream("/res/core-s3.xml"));
FileSystem fileSystem = FileSystem.get(conf);

// Copy a classpath resource into the S3-backed filesystem
InputStream in = TestS3.class.getResourceAsStream("/res/test.txt");
FSDataOutputStream out = fileSystem.create(new Path("/book.txt"));
byte[] buffer = new byte[10240];
while (true) {
    int read = in.read(buffer);
    if (read == -1) break;
    out.write(buffer, 0, read);
}
out.close();
in.close();
it works with both s3:// and s3n://.
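(Just to rule out the copy loop itself: stripped of the Hadoop and S3 pieces, it is the standard buffered-copy idiom and runs fine on plain in-memory streams. The class and its names below are mine, just for illustration:)

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyLoopCheck {
    // Same fixed-size-buffer copy loop as in the snippet above,
    // with no Hadoop dependency so it can run anywhere.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[10240];
        long total = 0;
        while (true) {
            int read = in.read(buffer);
            if (read == -1) break;
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello s3".getBytes("UTF-8");
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long n = copy(new ByteArrayInputStream(data), sink);
        System.out.println(n + " bytes copied");
    }
}
```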
Regards
Alexander