Posted to common-dev@hadoop.apache.org by "Bryan Pendleton (JIRA)" <ji...@apache.org> on 2007/01/11 20:24:27 UTC

[jira] Commented: (HADOOP-882) S3FileSystem should retry if there is a communication problem with S3

    [ https://issues.apache.org/jira/browse/HADOOP-882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463992 ] 

Bryan Pendleton commented on HADOOP-882:
----------------------------------------

Argh. Jira just ate my comment; this will be a terser version.
-- 
Retry levels should be configurable, up to the point of infinite retry. Long-running stream operations are better off not dying, even if they have to wait while S3 recovers from hardware shortages or failures. A minimal sketch of what that could look like follows below.
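
Here is a minimal sketch, assuming a configurable retry count where a negative value means "retry forever"; the names and the helper are illustrative only, not the actual S3FileSystem code:

    import java.io.IOException;

    public class S3RetrySketch {
        // Represents a single operation that talks to S3 and may fail with IOException.
        interface S3Call<T> { T run() throws IOException; }

        // Wraps an S3 call in a retry loop. maxRetries < 0 is taken to mean
        // "retry forever", matching the request for an infinite-retry setting.
        static <T> T withRetries(S3Call<T> call, int maxRetries)
                throws IOException, InterruptedException {
            int attempt = 0;
            while (true) {
                try {
                    return call.run();
                } catch (IOException e) {
                    attempt++;
                    if (maxRetries >= 0 && attempt > maxRetries) {
                        throw e; // give up after the configured limit
                    }
                    // simple capped exponential backoff between attempts
                    Thread.sleep(Math.min(60000L, 1000L << Math.min(attempt, 6)));
                }
            }
        }
    }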

Not sure if it's a separate issue, but failed writes aren't cleaned up very well right now. In DFS, a file that isn't closed doesn't exist for other operations. If possible, there should at least be a way to find out whether a file in S3 is "done"; preferably it should be invisible to "normal" operations while its state is not final. One way to approximate that is sketched below.
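
A hedged sketch of the "invisible until closed" idea, writing under a temporary name and renaming on a successful close; local files are used as a stand-in, and none of these names are real S3FileSystem internals:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    class InvisibleUntilClosed {
        private final File tmp;
        private final File dest;
        private final OutputStream out;

        InvisibleUntilClosed(File dest) throws IOException {
            this.dest = dest;
            // Data is written under a temporary name, so "normal" reads never see it.
            this.tmp = new File(dest.getPath() + ".inprogress");
            this.out = new FileOutputStream(tmp);
        }

        OutputStream stream() { return out; }

        void close() throws IOException {
            out.close();
            // Only a successful rename makes the file visible under its final name,
            // so an interrupted write leaves no half-finished "done" file behind.
            if (!tmp.renameTo(dest)) {
                throw new IOException("rename failed: " + tmp + " -> " + dest);
            }
        }
    }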

> S3FileSystem should retry if there is a communication problem with S3
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-882
>                 URL: https://issues.apache.org/jira/browse/HADOOP-882
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.10.1
>            Reporter: Tom White
>         Assigned To: Tom White
>
> File system operations currently fail if there is a communication problem (IOException) with S3. All operations that communicate with S3 should retry a fixed number of times before failing.
