Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2020/02/11 11:39:14 UTC

[GitHub] [hadoop] steveloughran commented on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584595196
 
 
   Checkstyle warnings:
   ```
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:278:   public static final String BULK_DELETE_PAGE_SIZE =: 'member def modifier' has incorrect indentation level 3, expected level should be 2. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:279:      "fs.s3a.bulk.delete.page.size";: '"fs.s3a.bulk.delete.page.size"' has incorrect indentation level 6, expected level should be 7. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:1965:   * with the counter set to the number of keys, rather than the number of invocations: Line is longer than 80 characters (found 86). [LineLength]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:1967:   * This is because S3 considers each key as one mutating operation on the store: Line is longer than 80 characters (found 81). [LineLength]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java:50:import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;:15: Unused import - org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion. [UnusedImports]
   ./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:206:   * @return true if the DDB table has prepaid IO and is small enough to throttle.: Line is longer than 80 characters (found 82). [LineLength]
   ./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:518:  public void test_999_delete_all_entries() throws Throwable {:15: Name 'test_999_delete_all_entries' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
   ./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ThrottleTracker.java:113:      LOG.warn("No throttling detected in {} against {}", this, ddbms.toString());: Line is longer than 80 characters (found 82). [LineLength]
   ./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:117:  @Parameterized.Parameters(name = "bulk-delete-client-retry={0}-requests={2}-size={1}"): Line is longer than 80 characters (found 88). [LineLength]
   ./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:184:  public void test_010_Reset() throws Throwable {:15: Name 'test_010_Reset' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
   ./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:189:  public void test_020_DeleteThrottling() throws Throwable {:15: Name 'test_020_DeleteThrottling' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
   ./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:202:  public void test_030_Sleep() throws Throwable {:15: Name 'test_030_Sleep' must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
   ```
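
Most of these are mechanical fixes. For illustration only, here is a minimal sketch of how the two Indentation warnings against Constants.java could be resolved, assuming the usual Hadoop two-space member indent with a four-space continuation; the class shell, Javadoc text and private constructor below are invented for the sketch and are not the actual patch in #1826:

```java
// Sketch only: a possible fix for the two Indentation warnings flagged in
// org.apache.hadoop.fs.s3a.Constants. Only the constant name and value come
// from the checkstyle output above; everything else is assumed for the
// illustration.
public final class Constants {

  /**
   * Configuration key for the bulk delete page size: {@value}.
   * The member declaration sits at the two-space indent checkstyle expects,
   * with the wrapped string literal on a four-space continuation indent.
   */
  public static final String BULK_DELETE_PAGE_SIZE =
      "fs.s3a.bulk.delete.page.size";

  private Constants() {
    // utility class: no instances
  }
}
```

The UnusedImports item is just deleting the unused waitForCompletion static import in DeleteOperation.java, and the LineLength items need the flagged comment lines wrapped at 80 characters. The MethodName warnings are triggered by the test_NNN_* names, which appear to encode an intentional execution order, so renaming them may not be wanted.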

