Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2007/01/09 22:47:27 UTC
[jira] Resolved: (HADOOP-865) Files written to S3 but never closed can't be deleted
[ https://issues.apache.org/jira/browse/HADOOP-865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Doug Cutting resolved HADOOP-865.
---------------------------------
Resolution: Fixed
Fix Version/s: 0.10.1
Assignee: Tom White
I just committed this. Thanks, Tom!
> Files written to S3 but never closed can't be deleted
> -----------------------------------------------------
>
> Key: HADOOP-865
> URL: https://issues.apache.org/jira/browse/HADOOP-865
> Project: Hadoop
> Issue Type: Bug
> Components: fs
> Reporter: Bryan Pendleton
> Assigned To: Tom White
> Fix For: 0.10.1
>
> Attachments: hadoop-865.patch
>
>
> I've been playing with the S3 integration. My first use of it is as a drop-in backup target, streaming data offsite by piping a backup job's output to "hadoop dfs -put - targetfile".
> If enough errors occur while posting to S3 (which happened easily last Thursday, during an S3 growth issue), the write can eventually fail. At that point, both data blocks and a partial INode have been written to S3. A "hadoop dfs -ls filename" shows the file with a non-zero size, but running "hadoop dfs -rm filename" on such a partially written file fails with "rm: No such file or directory."
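
The symptom is consistent with the list and delete paths consulting the store differently. Below is a minimal, hypothetical sketch in Java of how a half-written inode can be visible to "hadoop dfs -ls" yet invisible to "hadoop dfs -rm". The names (SimpleS3Store, Inode, complete) are invented for illustration and are not the actual Hadoop 0.10 S3FileSystem classes.

    import java.util.HashMap;
    import java.util.Map;

    // Invented stand-ins for a block-based S3 store; not Hadoop's real classes.
    class Inode {
        long[] blockIds;   // ids of the data blocks backing the file
        boolean complete;  // only set once the writer successfully close()s
    }

    class SimpleS3Store {
        // Stands in for the S3 bucket: path -> stored inode.
        private final Map<String, Inode> inodes = new HashMap<String, Inode>();

        // Listing only needs the key to exist, so a partial inode still
        // shows up under "hadoop dfs -ls" with a non-zero size.
        boolean exists(String path) {
            return inodes.containsKey(path);
        }

        // Deletion reads the inode back to find its blocks; a partial
        // inode fails that check, so delete() returns false and the
        // shell reports "rm: No such file or directory."
        boolean delete(String path) {
            Inode inode = inodes.get(path);
            if (inode == null || !inode.complete) {
                return false;
            }
            // ... remove inode.blockIds from the bucket, then the inode ...
            inodes.remove(path);
            return true;
        }
    }

The attached hadoop-865.patch is the actual fix; the sketch above only illustrates the reported asymmetry between listing and deletion, not what the patch changes.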
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira