Posted to common-dev@hadoop.apache.org by "Chris Bevard (Jira)" <ji...@apache.org> on 2023/04/17 17:46:00 UTC

[jira] [Created] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads.

Chris Bevard created HADOOP-18706:
-------------------------------------

             Summary: The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 
                 Key: HADOOP-18706
                 URL: https://issues.apache.org/jira/browse/HADOOP-18706
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs/s3
            Reporter: Chris Bevard


If an application crashes during an S3ABlockOutputStream upload and fs.s3a.fast.upload.buffer is set to disk, it is possible to complete the upload afterwards by uploading the leftover s3ablock file with putObject as the final part of the multipart upload. If the application had multiple uploads running in parallel, however, and they were on the same part number when it failed, there is no way to determine which temporary file belongs to which object, and recovery of either upload is impossible.
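
For context, here is a minimal sketch of such a recovery using the AWS SDK for Java v1, assuming the bucket, key, upload ID, and leftover s3ablock file are already known (all values below are hypothetical; the sketch finishes the multipart upload with uploadPart and completeMultipartUpload, whereas the report describes using putObject for the final part):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.*;

    public class PartialUploadRecovery {
        public static void main(String[] args) {
            // Hypothetical values: in a real recovery these must be
            // discovered, e.g. via listMultipartUploads and the contents
            // of the local buffer directory.
            String bucket = "my-bucket";
            String key = "path/to/object";
            String uploadId = "example-upload-id";
            File leftoverBlock = new File("/tmp/hadoop/s3ablock-0003-1234.tmp");

            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Collect the ETags of the parts that were already uploaded.
            List<PartETag> partETags = new ArrayList<>();
            PartListing parts =
                s3.listParts(new ListPartsRequest(bucket, key, uploadId));
            int lastPart = 0;
            for (PartSummary p : parts.getParts()) {
                partETags.add(new PartETag(p.getPartNumber(), p.getETag()));
                lastPart = Math.max(lastPart, p.getPartNumber());
            }

            // Upload the leftover disk-buffered block as the final part.
            UploadPartResult result = s3.uploadPart(new UploadPartRequest()
                .withBucketName(bucket)
                .withKey(key)
                .withUploadId(uploadId)
                .withPartNumber(lastPart + 1)
                .withFile(leftoverBlock));
            partETags.add(result.getPartETag());

            // Complete the multipart upload so the object becomes visible.
            s3.completeMultipartUpload(
                new CompleteMultipartUploadRequest(bucket, key, uploadId,
                    partETags));
        }
    }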

If the temporary file name used for disk buffering included the S3 object key, every partial upload would be recoverable.
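
As a sketch of the proposed fix, the temporary file name could embed a sanitized form of the S3 key alongside the block index. The helper below is hypothetical, not the actual S3ADataBlocks code; it only illustrates the naming scheme:

    import java.io.File;
    import java.io.IOException;

    public class BlockTempFiles {
        // Replace characters that are unsafe in file names so the S3 key
        // can be embedded in the temporary block file's name.
        static String sanitize(String key) {
            return key.replaceAll("[^A-Za-z0-9._-]", "_");
        }

        // Create a temp file whose name carries both the key and the
        // block index, e.g. "s3ablock-path_to_object-0003-8217.tmp",
        // so a leftover block can be matched back to its upload.
        static File createBlockFile(File bufferDir, String key, long index)
                throws IOException {
            String prefix =
                String.format("s3ablock-%s-%04d-", sanitize(key), index);
            return File.createTempFile(prefix, ".tmp", bufferDir);
        }
    }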



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org