Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2020/02/17 22:10:01 UTC
[jira] [Updated] (HADOOP-15961) S3A committers: make sure there's regular progress() calls
[ https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran updated HADOOP-15961:
------------------------------------
Fix Version/s: 3.3.0
Resolution: Fixed
Status: Resolved (was: Patch Available)
fixed in trunk; thanks for the patch, apologies for the very late review...catching up on all of these before we ship!
> S3A committers: make sure there's regular progress() calls
> ----------------------------------------------------------
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Steve Loughran
> Assignee: lqjacklee
> Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch, HADOOP-15961-003.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() callbacks are needed, even just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes enough data to the local FS that the subsequent upload takes longer than the task timeout.
> It should call progress() after every single file commit, or better: modify {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks after every part upload.
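The idea above can be sketched as follows. This is a hedged illustration, not the actual patch: the nested {{Progressable}} interface here mirrors Hadoop's {{org.apache.hadoop.util.Progressable}} callback, and the part-upload loop and method shape are hypothetical stand-ins for the real {{uploadFileToPendingCommit}} in the staging committer.

```java
// Sketch: thread a progress callback through a multipart-upload loop so a
// long-running task commit keeps signalling liveness and is not timed out.
// Progressable mirrors org.apache.hadoop.util.Progressable; the loop body
// and part sizing are illustrative, not the real S3A committer code.
public class ProgressSketch {

    /** Callback interface, analogous to Hadoop's Progressable. */
    interface Progressable {
        void progress();
    }

    /**
     * Upload a file in fixed-size parts, invoking the progress callback
     * after each part upload. Returns the number of parts uploaded.
     */
    static int uploadFileToPendingCommit(long fileSize, long partSize,
                                         Progressable progress) {
        int parts = 0;
        for (long offset = 0; offset < fileSize; offset += partSize) {
            // ... upload bytes [offset, min(offset + partSize, fileSize)) ...
            parts++;
            progress.progress();   // liveness signal after every part
        }
        return parts;
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        int parts = uploadFileToPendingCommit(100, 32, () -> calls[0]++);
        System.out.println("parts=" + parts + " progressCalls=" + calls[0]);
    }
}
```

Passing the callback down to the per-part loop, rather than pinging progress once per committed file, keeps the interval between liveness signals bounded by one part upload instead of one whole file.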
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org