Posted to commits@druid.apache.org by "abhishekrb19 (via GitHub)" <gi...@apache.org> on 2023/10/24 00:28:51 UTC

[PR] Update S3 retry logic to account for the underlying cause in case of `IOException` (druid)

abhishekrb19 opened a new pull request, #15238:
URL: https://github.com/apache/druid/pull/15238

   Sometimes we wrap `AmazonS3Exception` inside an `IOException`, and by default the S3 retry logic retries _all_ `IOException`s. For example, a 403 `AccessDenied` error wrapped inside an `IOException` shouldn't be retried.
   
   This PR updates the S3 retry logic to inspect the underlying cause, if one is found, before deciding whether to retry.
   
   
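   The cause-aware retry check can be sketched in plain Java. All names below are hypothetical stand-ins: Druid's actual predicate lives in `S3Utils.java` and works against the AWS SDK's `AmazonS3Exception`, which is not reproduced here.

   ```java
   import java.io.IOException;

   public class RetryCauseSketch {
       // Hypothetical stand-in for the AWS SDK's AmazonS3Exception, which
       // carries the HTTP status code of the failed request.
       public static class S3StyleException extends RuntimeException {
           public final int statusCode;

           public S3StyleException(String message, int statusCode) {
               super(message);
               this.statusCode = statusCode;
           }
       }

       // Walk the cause chain: if an S3-style exception is buried inside
       // (e.g. wrapped in an IOException), decide based on its status code
       // instead of blindly retrying every IOException.
       public static boolean shouldRetry(Throwable t) {
           for (Throwable cur = t; cur != null; cur = cur.getCause()) {
               if (cur instanceof S3StyleException) {
                   int status = ((S3StyleException) cur).statusCode;
                   // 4xx client errors (403 AccessDenied, 404 NoSuchKey, ...)
                   // are not transient, so retrying cannot help.
                   return status < 400 || status >= 500;
               }
           }
           // No S3 cause found: plain IOExceptions (connection reset,
           // timeout) remain retryable, as before.
           return t instanceof IOException;
       }
   }
   ```

   Under this check, an `IOException` whose cause is a 403 is no longer retried, while a bare `IOException` or a wrapped 5xx server error still is.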
   
   
   #### Release note
   
   S3 errors such as a 403 `AccessDenied` wrapped inside other exceptions, such as `IOException`, no longer trigger unnecessary retries.
   
   
   
   <hr>
   
   ##### Key changed/added classes in this PR
    * `S3Utils.java`
    * `S3UtilsTest.java`
   
   <hr>
   
   
   This PR has:
   
   - [x] been self-reviewed.
   - [x] a release note entry in the PR description.
   - [x] added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
   - [x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for [code coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md) is met.
   - [x] been tested in a test Druid cluster.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


Re: [PR] Update S3 retry logic to account for the underlying cause in case of `IOException` (druid)

Posted by "abhishekrb19 (via GitHub)" <gi...@apache.org>.
abhishekrb19 merged PR #15238:
URL: https://github.com/apache/druid/pull/15238




Re: [PR] Update S3 retry logic to account for the underlying cause in case of `IOException` (druid)

Posted by "github-advanced-security[bot] (via GitHub)" <gi...@apache.org>.
github-advanced-security[bot] commented on code in PR #15238:
URL: https://github.com/apache/druid/pull/15238#discussion_r1369478254


##########
extensions-core/s3-extensions/src/test/java/org/apache/druid/storage/s3/S3DataSegmentPullerTest.java:
##########
@@ -165,6 +165,65 @@
     AmazonS3Exception exception = new AmazonS3Exception("S3DataSegmentPullerTest");
     exception.setErrorCode("NoSuchKey");
     exception.setStatusCode(404);
+    EasyMock.expect(s3Client.doesObjectExist(EasyMock.eq(object0.getBucketName()), EasyMock.eq(object0.getKey())))
+            .andReturn(true)
+            .once();
+    EasyMock.expect(s3Client.getObject(EasyMock.eq(bucket), EasyMock.eq(object0.getKey())))
+            .andThrow(exception)
+            .once();
+    S3DataSegmentPuller puller = new S3DataSegmentPuller(s3Client);
+
+    EasyMock.replay(s3Client);
+    Assert.assertThrows(
+        SegmentLoadingException.class,
+        () -> puller.getSegmentFiles(
+            new CloudObjectLocation(
+                bucket,
+                object0.getKey()
+            ), tmpDir
+        )
+    );
+    EasyMock.verify(s3Client);
+
+    File expected = new File(tmpDir, "renames-0");
+    Assert.assertFalse(expected.exists());
+  }
+
+  @Test
+  public void testGZUncompressOn5xxError() throws IOException, SegmentLoadingException
+  {
+    final String bucket = "bucket";
+    final String keyPrefix = "prefix/dir/0";
+    final ServerSideEncryptingAmazonS3 s3Client = EasyMock.createStrictMock(ServerSideEncryptingAmazonS3.class);
+    final byte[] value = bucket.getBytes(StandardCharsets.UTF_8);
+
+    final File tmpFile = temporaryFolder.newFile("gzTest.gz");
+
+    try (OutputStream outputStream = new GZIPOutputStream(new FileOutputStream(tmpFile))) {
+      outputStream.write(value);
+    }
+
+    S3Object object0 = new S3Object();
+
+    object0.setBucketName(bucket);
+    object0.setKey(keyPrefix + "/renames-0.gz");
+    object0.getObjectMetadata().setLastModified(new Date(0));
+    object0.setObjectContent(new FileInputStream(tmpFile));

Review Comment:
   ## Potential input resource leak
   
   This FileInputStream is not always closed on method exit.
   
   [Show more details](https://github.com/apache/druid/security/code-scanning/2078)
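One conventional remedy for this class of warning (a sketch with hypothetical names, not necessarily the fix applied in the PR) is to scope the stream in a try-with-resources block so it is closed on every exit path, including when code between opening the stream and handing it off throws:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamOwnershipSketch {
    // Hypothetical consumer standing in for a call like
    // S3Object.setObjectContent: it takes ownership of the stream and
    // closes it itself via try-with-resources.
    public static byte[] consume(InputStream in) throws IOException {
        try (InputStream owned = in) {  // closed on every exit path
            return owned.readAllBytes();
        }
    }

    // Caller side: open the stream in try-with-resources so it is closed
    // even if something throws before the hand-off completes.
    public static byte[] readAll(Path file) throws IOException {
        try (InputStream in = Files.newInputStream(file)) {
            return consume(in);
        }
    }
}
```

Closing an already-closed `InputStream` is a no-op, so the double close here is harmless.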



##########
extensions-core/s3-extensions/src/test/java/org/apache/druid/storage/s3/S3DataSegmentPullerTest.java:
##########
@@ -165,6 +165,65 @@
     AmazonS3Exception exception = new AmazonS3Exception("S3DataSegmentPullerTest");
     exception.setErrorCode("NoSuchKey");
     exception.setStatusCode(404);
+    EasyMock.expect(s3Client.doesObjectExist(EasyMock.eq(object0.getBucketName()), EasyMock.eq(object0.getKey())))
+            .andReturn(true)
+            .once();
+    EasyMock.expect(s3Client.getObject(EasyMock.eq(bucket), EasyMock.eq(object0.getKey())))
+            .andThrow(exception)
+            .once();
+    S3DataSegmentPuller puller = new S3DataSegmentPuller(s3Client);
+
+    EasyMock.replay(s3Client);
+    Assert.assertThrows(
+        SegmentLoadingException.class,
+        () -> puller.getSegmentFiles(
+            new CloudObjectLocation(
+                bucket,
+                object0.getKey()
+            ), tmpDir
+        )
+    );
+    EasyMock.verify(s3Client);
+
+    File expected = new File(tmpDir, "renames-0");
+    Assert.assertFalse(expected.exists());
+  }
+
+  @Test
+  public void testGZUncompressOn5xxError() throws IOException, SegmentLoadingException
+  {
+    final String bucket = "bucket";
+    final String keyPrefix = "prefix/dir/0";
+    final ServerSideEncryptingAmazonS3 s3Client = EasyMock.createStrictMock(ServerSideEncryptingAmazonS3.class);
+    final byte[] value = bucket.getBytes(StandardCharsets.UTF_8);
+
+    final File tmpFile = temporaryFolder.newFile("gzTest.gz");
+
+    try (OutputStream outputStream = new GZIPOutputStream(new FileOutputStream(tmpFile))) {

Review Comment:
   ## Potential output resource leak
   
   This FileOutputStream is not always closed on method exit.
   
   [Show more details](https://github.com/apache/druid/security/code-scanning/5927)
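The warning here is subtler: `GZIPOutputStream`'s constructor writes the gzip header, so it can throw after the underlying `FileOutputStream` has already opened the file, leaking the inner stream. A common pattern (a sketch with hypothetical names) is to declare each stream as its own resource so the inner one is closed even if the wrapper's constructor fails:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;

public class NestedResourceSketch {
    // Each stream is its own resource: if the GZIPOutputStream constructor
    // throws (it writes a gzip header, so it performs I/O), the file stream
    // opened on the first line is still closed.
    public static void writeGzipped(Path file, byte[] value) throws IOException {
        try (OutputStream fileOut = Files.newOutputStream(file);
             OutputStream gzOut = new GZIPOutputStream(fileOut)) {
            gzOut.write(value);
        }
    }
}
```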


