Posted to common-issues@hadoop.apache.org by "mehakmeet (via GitHub)" <gi...@apache.org> on 2023/04/11 09:25:12 UTC

[GitHub] [hadoop] mehakmeet commented on a diff in pull request #5481: HADOOP-18637:S3A to support upload of files greater than 2 GB using DiskBlocks

mehakmeet commented on code in PR #5481:
URL: https://github.com/apache/hadoop/pull/5481#discussion_r1162432674


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java:
##########
@@ -217,6 +217,10 @@ protected AbstractS3ACommitter(
     LOG.debug("{} instantiated for job \"{}\" ID {} with destination {}",
         role, jobName(context), jobIdString(context), outputPath);
     S3AFileSystem fs = getDestS3AFS();
+    if (!fs.isMultipartUploadEnabled()) {

Review Comment:
   So we want to fail any s3a committer initialization if multipart upload is disabled? IIRC the magic committer does require multipart, but should we be failing for the others as well? CC @steveloughran 
   
   Also, a lot of tests seem to fail when I run the suite with default props (by default this should be true, so we shouldn't fail here). This could be because the UTs use "MockS3AFileSystem", which doesn't actually initialize and set the variable.
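   A minimal, self-contained sketch of the fail-fast behaviour under discussion (class and method names here are illustrative stand-ins, not the real Hadoop identifiers): construction should throw when the destination filesystem reports multipart uploads as disabled, which is also why a mock filesystem that never sets the flag would trip such a check.

   ```java
   // Illustrative sketch only: FakeS3AFs stands in for S3AFileSystem, and
   // checkMultipartSupport mimics a fail-fast guard in a committer
   // constructor. None of these names are the actual Hadoop API.
   public class CommitterGuardSketch {

       /** Stand-in for the filesystem capability probe. */
       static class FakeS3AFs {
           private final boolean multipartEnabled;

           FakeS3AFs(boolean multipartEnabled) {
               this.multipartEnabled = multipartEnabled;
           }

           boolean isMultipartUploadEnabled() {
               return multipartEnabled;
           }
       }

       /** Throws if multipart uploads are disabled, mirroring a fail-fast init check. */
       static void checkMultipartSupport(FakeS3AFs fs) {
           if (!fs.isMultipartUploadEnabled()) {
               throw new IllegalStateException(
                   "committer requires multipart uploads to be enabled");
           }
       }

       public static void main(String[] args) {
           checkMultipartSupport(new FakeS3AFs(true)); // enabled: passes

           boolean rejected = false;
           try {
               checkMultipartSupport(new FakeS3AFs(false)); // disabled: throws
           } catch (IllegalStateException e) {
               rejected = true;
           }
           System.out.println("disabled filesystem rejected: " + rejected);
       }
   }
   ```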



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java:
##########
@@ -369,6 +373,8 @@ private synchronized void uploadCurrentBlock(boolean isLast)
    */
   @Retries.RetryTranslated
   private void initMultipartUpload() throws IOException {
+    Preconditions.checkState(!isMultipartUploadEnabled,

Review Comment:
   +1
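   For context, the contract of the `Preconditions.checkState` call quoted in the diff above is to throw `IllegalStateException` when its condition is false. A minimal stand-in (the real class comes from Hadoop's Guava-derived utilities, not this sketch) behaves like this:

   ```java
   // Minimal stand-in for Preconditions.checkState as used in the diff
   // above; the real class lives in Hadoop's utility packages. Contract:
   // throw IllegalStateException when the condition is false.
   public final class PreconditionsSketch {
       private PreconditionsSketch() {
       }

       public static void checkState(boolean expression, String message) {
           if (!expression) {
               throw new IllegalStateException(message);
           }
       }

       public static void main(String[] args) {
           checkState(true, "never thrown"); // condition holds: no-op

           boolean threw = false;
           try {
               checkState(false, "multipart upload guard tripped");
           } catch (IllegalStateException e) {
               threw = true;
           }
           System.out.println("checkState(false) threw: " + threw);
       }
   }
   ```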



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org
