Posted to common-issues@hadoop.apache.org by "Andrew Wang (JIRA)" <ji...@apache.org> on 2017/06/26 20:30:00 UTC

[jira] [Updated] (HADOOP-11572) s3a delete() operation fails during a concurrent delete of child entries

     [ https://issues.apache.org/jira/browse/HADOOP-11572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HADOOP-11572:
---------------------------------
    Fix Version/s: 3.0.0-alpha4

> s3a delete() operation fails during a concurrent delete of child entries
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-11572
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11572
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>             Fix For: 2.9.0, 3.0.0-alpha4
>
>         Attachments: HADOOP-11572-001.patch, HADOOP-11572-branch-2-002.patch, HADOOP-11572-branch-2-003.patch
>
>
> Reviewing the code, s3a has the problem raised in HADOOP-6688: the deletion of a child entry during a recursive directory delete is propagated as an exception, rather than treated as a detail that an idempotent delete operation should ignore.
> The exception should be caught and, if it is a file-not-found condition, logged rather than propagated.
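
A minimal sketch of that approach over a generic Hadoop FileSystem; this is not the committed HADOOP-11572 patch, and the helper name deleteChildren and the exact listing/delete calls are illustrative assumptions:

    // Illustrative sketch only -- not the actual HADOOP-11572 patch.
    import java.io.FileNotFoundException;
    import java.io.IOException;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class TolerantDelete {
      private static final Logger LOG =
          LoggerFactory.getLogger(TolerantDelete.class);

      /**
       * Delete the children of a directory, treating a concurrent deletion
       * of a child (surfacing as FileNotFoundException) as success: the
       * entry is already gone, which is what an idempotent delete wants.
       */
      static void deleteChildren(FileSystem fs, Path dir) throws IOException {
        FileStatus[] children;
        try {
          children = fs.listStatus(dir);
        } catch (FileNotFoundException e) {
          // Directory vanished under us: nothing left to delete.
          LOG.debug("Directory {} already deleted", dir, e);
          return;
        }
        for (FileStatus child : children) {
          try {
            fs.delete(child.getPath(), true);
          } catch (FileNotFoundException e) {
            // Another client deleted this child concurrently; log and
            // continue instead of propagating the exception to the caller.
            LOG.debug("Child {} already deleted", child.getPath(), e);
          }
        }
      }
    }

Only FileNotFoundException is swallowed here, which keeps the delete idempotent while still surfacing genuine I/O failures to the caller.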



