Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2023/05/05 10:59:00 UTC

[jira] [Assigned] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads.

     [ https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-18706:
---------------------------------------

    Assignee: Chris Bevard

> The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 
> -------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-18706
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18706
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Chris Bevard
>            Assignee: Chris Bevard
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> If an application crashes during an S3ABlockOutputStream upload while fs.s3a.fast.upload.buffer is set to disk, it is possible to complete the upload by uploading the leftover s3ablock buffer file as the final part of the multipart upload (see the recovery sketch below). If the application had multiple uploads running in parallel, though, and they were on the same part number when it failed, there is no way to determine which buffer file belongs to which object, and neither upload can be recovered.
> If the temporary file name used for disk buffering included the S3 key, every partial upload would be recoverable; a sketch of such a naming scheme follows the recovery example below.
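
For anyone attempting that recovery by hand, here is a minimal sketch (AWS SDK for Java v1). The bucket, key, upload ID, part number and buffer-file path are illustrative placeholders, not values from this issue; in practice they must be found by listing the bucket's outstanding multipart uploads and matching the leftover s3ablock file to one of them, which is exactly the matching step this issue reports as ambiguous:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
    import com.amazonaws.services.s3.model.ListPartsRequest;
    import com.amazonaws.services.s3.model.PartETag;
    import com.amazonaws.services.s3.model.PartSummary;
    import com.amazonaws.services.s3.model.UploadPartRequest;

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class RecoverPartialUpload {
      public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-bucket";               // placeholder
        String key = "data/output.bin";            // placeholder
        String uploadId = "EXAMPLE-UPLOAD-ID";     // from listing multipart uploads
        File block = new File("/tmp/hadoop/s3a/s3ablock-0003-1234.tmp"); // leftover buffer file

        // Upload the leftover disk-buffered block as the missing part.
        s3.uploadPart(new UploadPartRequest()
            .withBucketName(bucket)
            .withKey(key)
            .withUploadId(uploadId)
            .withPartNumber(3)                     // the part the crash interrupted
            .withFile(block));

        // List every part now attached to the upload (including the one
        // just sent) and collect the ETags needed to complete it.
        List<PartETag> etags = new ArrayList<>();
        for (PartSummary p : s3.listParts(new ListPartsRequest(bucket, key, uploadId)).getParts()) {
          etags.add(new PartETag(p.getPartNumber(), p.getETag()));
        }

        // Completing the multipart upload makes the object visible in S3.
        s3.completeMultipartUpload(
            new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
      }
    }

If two leftover buffer files carry the same part number, the withFile() argument above cannot be chosen safely; that is the failure mode described in the report.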


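And a minimal sketch of the proposed naming change, assuming the fix simply embeds a sanitized copy of the destination key in the buffer file's prefix; the exact format in the committed patch may differ:

    import java.io.File;
    import java.io.IOException;

    public class BufferFileNaming {
      // Embed a sanitized copy of the destination key in the temp-file prefix
      // so a leftover buffer file can be traced back to its object after a crash.
      static File createBlockFile(File dir, String key, int partNumber) throws IOException {
        String safeKey = key.replaceAll("[^a-zA-Z0-9]", "_"); // keep the name filesystem-safe
        String prefix = String.format("s3ablock-%s-%04d-", safeKey, partNumber);
        return File.createTempFile(prefix, ".tmp", dir);
      }

      public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        // Two parallel uploads stuck on the same part number now produce
        // distinct, attributable names, e.g. s3ablock-logs_a_bin-0003-...tmp
        // versus s3ablock-logs_b_bin-0003-...tmp.
        System.out.println(createBlockFile(dir, "logs/a.bin", 3));
        System.out.println(createBlockFile(dir, "logs/b.bin", 3));
      }
    }

Sanitizing the key rather than using it verbatim avoids path separators and other characters that are legal in S3 keys but not in local file names.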
