Posted to common-dev@hadoop.apache.org by "Tom White (JIRA)" <ji...@apache.org> on 2008/05/07 17:02:56 UTC
[jira] Assigned: (HADOOP-930) Add support for reading regular (non-block-based) files from S3 in S3FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tom White reassigned HADOOP-930:
--------------------------------
Assignee: Tom White
> Add support for reading regular (non-block-based) files from S3 in S3FileSystem
> -------------------------------------------------------------------------------
>
> Key: HADOOP-930
> URL: https://issues.apache.org/jira/browse/HADOOP-930
> Project: Hadoop Core
> Issue Type: New Feature
> Components: fs
> Affects Versions: 0.10.1
> Reporter: Tom White
> Assignee: Tom White
> Attachments: hadoop-930-v2.patch, hadoop-930.patch, jets3t-0.6.0.jar
>
>
> People often have input data on S3 that they want to use for a MapReduce job, and the current S3FileSystem implementation cannot read it since it assumes a block-based format.
> We would add the following metadata to files written by S3FileSystem: an indication that it is block oriented ("S3FileSystem.type=block") and a filesystem version number ("S3FileSystem.version=1.0"). Regular S3 files would not have the type metadata so S3FileSystem would not try to interpret them as inodes.
> An extension to write regular files to S3 would not be covered by this change - we could do this as a separate piece of work (we still need to decide whether to introduce another scheme - e.g. rename block-based S3 to "s3fs" and call regular S3 "s3" - or whether to just use a configuration property to control block-based vs. regular writes).
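The detection scheme described above — block-based files carry type and version metadata, regular S3 files carry neither — can be sketched as follows. This is an illustrative outline, not the actual patch: the class and method names here are hypothetical, and real code would read the metadata map from the S3 object via the client library (e.g. jets3t) rather than receive it directly.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the metadata check described in HADOOP-930.
// Files written by S3FileSystem would carry "S3FileSystem.type=block"
// and "S3FileSystem.version=1.0" as user metadata; regular S3 files
// would have no type metadata and so would not be treated as inodes.
public class S3FileTypeCheck {

    static final String TYPE_KEY = "S3FileSystem.type";
    static final String VERSION_KEY = "S3FileSystem.version";

    /** Returns true if the object's user metadata marks it as block-based. */
    static boolean isBlockBased(Map<String, String> metadata) {
        return "block".equals(metadata.get(TYPE_KEY));
    }

    public static void main(String[] args) {
        // A file written by S3FileSystem: both metadata keys present.
        Map<String, String> blockFile = new HashMap<>();
        blockFile.put(TYPE_KEY, "block");
        blockFile.put(VERSION_KEY, "1.0");

        // A regular S3 file: no S3FileSystem metadata at all.
        Map<String, String> regularFile = new HashMap<>();

        System.out.println(isBlockBased(blockFile));   // true
        System.out.println(isBlockBased(regularFile)); // false
    }
}
```

With this check, the filesystem could fall back to streaming a regular object's bytes directly when the type metadata is absent, instead of attempting to parse the object as an inode.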
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.