Posted to common-issues@hadoop.apache.org by "Andrew Olson (Jira)" <ji...@apache.org> on 2020/03/02 22:17:00 UTC

[jira] [Created] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem

Andrew Olson created HADOOP-16900:
-------------------------------------

             Summary: Very large files can be truncated when written through S3AFileSystem
                 Key: HADOOP-16900
                 URL: https://issues.apache.org/jira/browse/HADOOP-16900
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/s3
            Reporter: Andrew Olson


If a written file's size exceeds 10,000 * {{fs.s3a.multipart.size}}, the S3 object will be silently truncated and corrupted: the S3 API caps a multipart upload at 10,000 parts, and there is an apparent bug where exceeding that limit is not treated as a fatal failure.
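
For illustration, a minimal sketch (not part of the issue report) of the arithmetic behind the truncation threshold, using the standard Hadoop {{Configuration}} API. The 64M fallback below is illustrative rather than the documented default.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3AMultipartLimitCheck {
    // S3 API hard limit on the number of parts in a single multipart upload.
    private static final long MAX_MULTIPART_COUNT = 10_000L;

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Read the configured part size; the 64M fallback here is illustrative only.
        long partSize = conf.getLongBytes("fs.s3a.multipart.size", 64L * 1024 * 1024);

        long maxSafeObjectSize = MAX_MULTIPART_COUNT * partSize;
        System.out.println("fs.s3a.multipart.size = " + partSize + " bytes");
        System.out.println("Largest object writable without exceeding the "
            + "10,000-part limit = " + maxSafeObjectSize + " bytes");
        // e.g. with a 64 MiB part size: 10,000 * 64 MiB = 625 GiB.
        // Per this report, writes beyond that size hit the part-count limit
        // without a fatal error, leaving a truncated object in S3.
    }
}
{code}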



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
