Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2019/06/30 14:59:33 UTC

[GitHub] [hadoop] steveloughran commented on a change in pull request #1037: limit the r/w capacity

steveloughran commented on a change in pull request #1037: limit the r/w capacity 
URL: https://github.com/apache/hadoop/pull/1037#discussion_r298838161
 
 

 ##########
 File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java
 ##########
 @@ -55,6 +57,23 @@
   @Rule
   public final Timeout timeout = new Timeout(5 * 60 * 1000);
 
 +  protected Configuration createConfiguration() {
 +    Configuration conf = super.createConfiguration();
 +    // patch the read/write capacity down to the minimum
 +    boolean scaleCapacityLimitEnabled = conf.getBoolean(
 +        "fs.s3a.s3guard.ddb.table.scale.capacity.limit", true);
 +    if (scaleCapacityLimitEnabled) {
 +      LOG.info("Enabling the capacity limit: {} -> {}, {} -> {}",
 +          S3GUARD_DDB_TABLE_CAPACITY_READ_KEY, 1,
 +          S3GUARD_DDB_TABLE_CAPACITY_WRITE_KEY, 1);
 +      conf.set(S3GUARD_DDB_TABLE_CAPACITY_READ_KEY, "1");
 +      conf.set(S3GUARD_DDB_TABLE_CAPACITY_WRITE_KEY, "1");
 +    }
 +    if (scaleCapacityLimitEnabled) {
 
 Review comment:
  we don't need these - they don't show much.
  
  What would be better would be for the thread which actually creates a table to ask for the bucket metadata and work out the capacity there. But I'm happy not to worry about that... All I'd prefer is to make sure that we don't run up any bills from test tables which weren't deleted.
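  For reference, the gate discussed above is the boolean option the patch reads (`fs.s3a.s3guard.ddb.table.scale.capacity.limit`, default `true`). As a sketch only, a test run which wanted full provisioned capacity could opt out of the clamp in its Hadoop test configuration; the property layout below is the usual Hadoop XML style, and the exact file (`auth-keys.xml` vs `core-site.xml`) is an assumption:
  
  ```xml
  <property>
    <name>fs.s3a.s3guard.ddb.table.scale.capacity.limit</name>
    <!-- defaults to true: with the limit on, test tables are created with
         1 read / 1 write capacity unit, so tables which were not deleted
         cannot run up a large DynamoDB bill -->
    <value>false</value>
  </property>
  ```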

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org