Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/10/30 17:38:01 UTC
[jira] [Created] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
Steve Loughran created HADOOP-15888:
---------------------------------------
Summary: ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
Key: HADOOP-15888
URL: https://issues.apache.org/jira/browse/HADOOP-15888
Project: Hadoop Common
Issue Type: Bug
Components: fs/s3, test
Affects Versions: 3.1.2
Reporter: Steve Loughran
This is me doing some backporting of patches from branch-3.2, so it may be an intermediate condition, but:
# I'd noticed I wasn't actually running ITestDynamoDBMetadataStore
# so I set it up to work with the right config opts (table and region)
# but the tests were timing out
# looking at the DDB tables in the AWS console showed a number of leaked tables named "testProvisionTable", each created with 500 read / 100 write capacity (i.e. ~$50/month)
I haven't replicated this in trunk/branch-3.2 itself, but it's clearly dangerous. At the very least, we should create every table with a capacity of 1 read / 1 write, so the cost of a test failure is negligible, and then we should document the risk and best practice.
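For reference, the 500/100 defaults above correspond to the S3Guard DDB capacity options, which can be overridden in the test configuration (e.g. the hadoop-aws auth-keys.xml); a sketch of the minimal-capacity setup, with the table/region values being placeholders:

```xml
<!-- Sketch: force minimal provisioned capacity for test tables.
     Table name and region values here are placeholders. -->
<property>
  <name>fs.s3a.s3guard.ddb.table.capacity.read</name>
  <value>1</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table.capacity.write</name>
  <value>1</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table</name>
  <value>s3guard-test-table</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.region</name>
  <value>eu-west-1</value>
</property>
```

With capacity 1/1, a leaked table costs pennies rather than ~$50/month.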
Also: use "s3guard" as the table prefix to make its origin clear.
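A consistent prefix would also let leaked tables be found and swept mechanically. A hedged sketch of the filtering logic (the prefix list and function name are assumptions, not anything in the Hadoop codebase; in practice the name list would come from `aws dynamodb list-tables` or boto3):

```python
# Hypothetical sweep helper: everything here (names, prefixes) is an
# assumption for illustration, not part of hadoop-aws.

LEAKED_PREFIXES = ("testProvision", "s3guard-")  # assumed test-table naming

def leaked_test_tables(table_names, prefixes=LEAKED_PREFIXES):
    """Return the table names that look like leaked S3Guard test tables."""
    # str.startswith accepts a tuple of prefixes
    return [name for name in table_names if name.startswith(prefixes)]

if __name__ == "__main__":
    # Feed in names from `aws dynamodb list-tables`; delete the matches
    # (after review) with `aws dynamodb delete-table --table-name <name>`.
    sample = ["prod-metadata", "testProvisionTable", "s3guard-test-123"]
    print(leaked_test_tables(sample))
```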
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org