Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2021/03/11 13:02:00 UTC

[jira] [Resolved] (HADOOP-16721) Improve S3A rename resilience

     [ https://issues.apache.org/jira/browse/HADOOP-16721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16721.
-------------------------------------
    Fix Version/s: 3.3.1
       Resolution: Fixed

> Improve S3A rename resilience
> -----------------------------
>
>                 Key: HADOOP-16721
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16721
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 3.3.1
>
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> h3. race condition in delete/rename overlap
> If you have multiple threads on a system doing rename operations, then one thread doing a delete(dest/subdir) may delete the last file under a subdir and, before it has listed and recreated any parent dir marker, other threads may conclude there is an empty dest dir and fail.
> This is most likely on an overloaded system with many threads executing rename operations, as with parallel copying taking place there are many threads to schedule and HTTPS connections to pool.
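> A sketch of the kind of workload which can hit this window; the bucket name, paths and pool size below are purely illustrative and not taken from the issue. One task deletes the last file under a subdirectory of the destination while others rename into sibling paths:
> {code}
> import java.net.URI;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class RenameDeleteRace {
>   public static void main(String[] args) throws Exception {
>     // Illustrative workload only; paths and pool size are made up.
>     FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), new Configuration());
>     ExecutorService pool = Executors.newFixedThreadPool(8);
>
>     // Deleting the last file under dest/subdir forces S3A to recreate
>     // the parent directory marker afterwards.
>     pool.submit(() -> fs.delete(new Path("s3a://example-bucket/dest/subdir"), true));
>
>     // Concurrent renames into the same destination tree; in the window
>     // before the marker is recreated, a rename may see an "empty" dest
>     // directory and fail.
>     for (int i = 0; i < 7; i++) {
>       final Path src = new Path("s3a://example-bucket/work/part-" + i);
>       final Path dst = new Path("s3a://example-bucket/dest/part-" + i);
>       pool.submit(() -> fs.rename(src, dst));
>     }
>     pool.shutdown();
>   }
> }
> {code}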
> h3. failure reporting
> The classic {{rename(source, dest)}} operation returns {{false}} on certain failures, which, while somewhat consistent with the POSIX APIs, turns out to be useless for identifying the cause of problems. Applications tend to have code which goes
> {code}
> if (!fs.rename(src, dest)) throw new IOException("rename failed and we don't know why");
> {code}
> This change modifies the S3A FS to:
> # raise FileNotFoundException if the source is missing
> # raise FileAlreadyExistsException if the destination isn't suitable for the source (source is a dir and the dest is one of: a file, a non-empty directory)
> It still returns {{false}} for "no-op renames", e.g. where source == dest.
> Other stores already raise the same exceptions; with this change S3A moves away from consistency with HDFS towards behaviour where applications can find out what went wrong.
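> A minimal sketch of how an application can react once these exceptions are raised; the helper name and error messages are illustrative, not part of the patch:
> {code}
> import java.io.FileNotFoundException;
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FileAlreadyExistsException;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public final class RenameHelper {
>   private RenameHelper() {
>   }
>
>   /** Rename and report the actual cause of failure instead of a bare false. */
>   public static void renameOrReport(FileSystem fs, Path src, Path dest) throws IOException {
>     // Illustrative only: relies on S3A raising these exceptions after HADOOP-16721.
>     try {
>       if (!fs.rename(src, dest)) {
>         // Still false for no-op renames, e.g. src == dest: nothing to do.
>         return;
>       }
>     } catch (FileNotFoundException e) {
>       throw new IOException("rename failed: source " + src + " is missing", e);
>     } catch (FileAlreadyExistsException e) {
>       throw new IOException("rename failed: destination " + dest
>           + " is a file or non-empty directory", e);
>     }
>   }
> }
> {code}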



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
