Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/10/29 19:54:00 UTC
[jira] [Assigned] (HADOOP-14714) handle InternalError in bulk object delete through retries
[ https://issues.apache.org/jira/browse/HADOOP-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran reassigned HADOOP-14714:
---------------------------------------
Assignee: (was: Steve Loughran)
> handle InternalError in bulk object delete through retries
> ----------------------------------------------------------
>
> Key: HADOOP-14714
> URL: https://issues.apache.org/jira/browse/HADOOP-14714
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Priority: Major
>
> There's some more detail appearing on HADOOP-11572 about the errors seen here; it sounds like it's related to large filesets (or just probability working against you). Most importantly: retries may make the error go away.
> Proposed: implement a retry policy (see the sketch below).
> Issue: delete is not idempotent, at least not if someone else adds objects under the same keys between attempts.
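> A minimal sketch of such a retry loop, assuming the AWS SDK for Java v1 and illustrative retry bounds (the attempt limit and backoff values are assumptions, and this is not the S3A implementation). It retries the bulk delete only on the transient 500/InternalError response and rethrows everything else:
> {code}
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.model.AmazonS3Exception;
> import com.amazonaws.services.s3.model.DeleteObjectsRequest;
>
> public class BulkDeleteWithRetry {
>   // Illustrative assumptions, not values from this issue.
>   private static final int MAX_ATTEMPTS = 3;
>   private static final long BASE_BACKOFF_MS = 500;
>
>   static void deleteWithRetry(AmazonS3 s3, DeleteObjectsRequest request)
>       throws InterruptedException {
>     for (int attempt = 1; ; attempt++) {
>       try {
>         s3.deleteObjects(request);
>         return;
>       } catch (AmazonS3Exception e) {
>         // Retry only the transient 500/InternalError case; rethrow the rest.
>         boolean retryable = e.getStatusCode() == 500
>             && "InternalError".equals(e.getErrorCode());
>         if (!retryable || attempt >= MAX_ATTEMPTS) {
>           throw e;
>         }
>         // Exponential backoff between attempts.
>         Thread.sleep(BASE_BACKOFF_MS << (attempt - 1));
>       }
>     }
>   }
> }
> {code}
> Note how this runs into the non-idempotency caveat above: a retry re-issues the delete for the same keys, so objects created under those keys by another client between attempts would also be removed.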