Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2019/08/20 16:56:00 UTC
[jira] [Updated] (HADOOP-16490) S3GuardExistsRetryPolicy handle FNFE eventual consistency better
[ https://issues.apache.org/jira/browse/HADOOP-16490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran updated HADOOP-16490:
------------------------------------
Description:
If S3Guard is encountering delayed consistency (FNFE from a tombstone; failure to open a file) then
* it only retries with the same retry settings as everything else. We should make this separately configurable
* when an FNFE is finally thrown, rename() treats it as being caused by the original source path missing, when in fact it's something else. Proposed: somehow propagate the failure up differently, probably in the S3AFileSystem.copyFile() code
* don't do HEAD checks when creating files
* shell commands should avoid deleteOnExit calls, as these also generate HEAD calls by way of exists() checks
was:
If S3Guard is encountering delayed consistency (FNFE from a tombstone; failure to open a file) then
* it only retries with the same retry settings as everything else. We should make this separately configurable
* when an FNFE is finally thrown, rename() treats it as being caused by the original source path missing, when in fact it's something else. Proposed: somehow propagate the failure up differently, probably in the S3AFileSystem.copyFile() code
> S3GuardExistsRetryPolicy handle FNFE eventual consistency better
> ----------------------------------------------------------------
>
> Key: HADOOP-16490
> URL: https://issues.apache.org/jira/browse/HADOOP-16490
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
>
> If S3Guard is encountering delayed consistency (FNFE from a tombstone; failure to open a file) then
> * it only retries with the same retry settings as everything else. We should make this separately configurable
> * when an FNFE is finally thrown, rename() treats it as being caused by the original source path missing, when in fact it's something else. Proposed: somehow propagate the failure up differently, probably in the S3AFileSystem.copyFile() code
> * don't do HEAD checks when creating files
> * shell commands should avoid deleteOnExit calls, as these also generate HEAD calls by way of exists() checks
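The first bullet can be sketched in isolation. The class and parameter names below are hypothetical, not the actual S3GuardExistsRetryPolicy or any fs.s3a.* option; this is only a minimal illustration of giving FNFE its own retry budget, separate from the generic retry settings, so consistency-delay FNFEs can be retried longer without slowing down every other failure mode:

```java
import java.io.FileNotFoundException;
import java.util.concurrent.Callable;

/**
 * Illustrative sketch only (not the Hadoop implementation): an FNFE-aware
 * retry wrapper with its own attempt limit and interval, distinct from
 * whatever retry policy handles throttling or network errors.
 */
public class FnfeAwareRetry {
  private final int fnfeRetryLimit;   // hypothetical knob: retries reserved for FNFE
  private final long intervalMillis;  // hypothetical knob: pause between attempts

  public FnfeAwareRetry(int fnfeRetryLimit, long intervalMillis) {
    this.fnfeRetryLimit = fnfeRetryLimit;
    this.intervalMillis = intervalMillis;
  }

  /** Run the operation, retrying only on FileNotFoundException. */
  public <T> T run(Callable<T> operation) throws Exception {
    FileNotFoundException last = null;
    for (int attempt = 0; attempt <= fnfeRetryLimit; attempt++) {
      try {
        return operation.call();
      } catch (FileNotFoundException e) {
        last = e;                     // possibly delayed consistency: retry
        Thread.sleep(intervalMillis);
      }
      // any other exception propagates immediately to the caller's
      // normal retry policy, untouched by the FNFE budget
    }
    throw last;                       // budget exhausted: surface the FNFE
  }
}
```

Because only FileNotFoundException is caught here, a caller such as rename() could also distinguish "source genuinely missing" from "copy target not yet visible" by where the exhausted FNFE finally surfaces, which is the direction the second bullet proposes for S3AFileSystem.copyFile().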
--
This message was sent by Atlassian Jira
(v8.3.2#803003)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org