Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2015/03/09 22:33:40 UTC

[jira] [Updated] (HADOOP-11572) s3a delete() operation fails during a concurrent delete of child entries

     [ https://issues.apache.org/jira/browse/HADOOP-11572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-11572:
------------------------------------
    Parent Issue: HADOOP-11694  (was: HADOOP-11571)

> s3a delete() operation fails during a concurrent delete of child entries
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-11572
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11572
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Steve Loughran
>            Assignee: Varun Saxena
>             Fix For: 2.7.0
>
>
> Reviewing the code, s3a has the problem raised in HADOOP-6688: the deletion of a child entry during a recursive directory delete is propagated as an exception, rather than treated as a detail that an idempotent operation should simply ignore.
> The exception should be caught and, if it indicates a file-not-found condition, logged rather than propagated.
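> A minimal sketch of that handling, assuming a hypothetical per-child delete hook (the ChildDeleter and deleteChildren names below are illustrative only, not actual S3AFileSystem internals):
> {code:java}
> import java.io.FileNotFoundException;
> import java.io.IOException;
> import java.util.List;
>
> public class RecursiveDeleteSketch {
>
>   /** Placeholder for the per-entry delete, which may throw
>    *  FileNotFoundException if another client removed the entry first. */
>   interface ChildDeleter {
>     void delete(String key) throws IOException;
>   }
>
>   /** Delete every child key, treating "already gone" as success so the
>    *  overall recursive delete stays idempotent. */
>   static void deleteChildren(List<String> childKeys, ChildDeleter deleter)
>       throws IOException {
>     for (String key : childKeys) {
>       try {
>         deleter.delete(key);
>       } catch (FileNotFoundException e) {
>         // A concurrent delete already removed this entry: log it and
>         // keep going rather than failing the whole recursive delete.
>         System.out.println("Child already deleted, ignoring: " + key);
>       }
>     }
>   }
> }
> {code}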



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)