Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/10/09 15:23:00 UTC
[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16643618#comment-16643618 ]
Steve Loughran commented on HADOOP-15834:
-----------------------------------------
Also, switching to dynamic capacity seems to trigger periods when the DDB table is no longer ACTIVE:
{code}
[ERROR] Tests run: 68, Failures: 0, Errors: 1, Skipped: 4, Time elapsed: 429.837 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations
[ERROR] testCreateFlagAppendNonExistingFile(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations) Time elapsed: 127.843 s <<< ERROR!
java.lang.RuntimeException: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state.
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:464)
at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileContext(S3ATestUtils.java:218)
at org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations.setUp(ITestS3AFileContextMainOperations.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
Caused by: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state.
at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:114)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:378)
at org.apache.hadoop.fs.DelegateToFileSystem.<init>(DelegateToFileSystem.java:52)
at org.apache.hadoop.fs.s3a.S3A.<init>(S3A.java:40)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:135)
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:173)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:258)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:336)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:333)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:333)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:459)
... 28 more
Caused by: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state.
at com.amazonaws.services.dynamodbv2.document.Table.waitForActive(Table.java:489)
at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.waitForTableActive(DynamoDBMetadataStore.java:1324)
at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1185)
at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:356)
at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
... 45 more
Caused by: com.amazonaws.waiters.WaiterTimedOutException: Reached maximum attempts without transitioning to the desired state
at com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:86)
at com.amazonaws.waiters.WaiterImpl.run(WaiterImpl.java:88)
at com.amazonaws.services.dynamodbv2.document.Table.waitForActive(Table.java:482)
... 49 more
{code}
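For context, the {{WaiterTimedOutException}} above comes from {{Table.waitForActive()}}, which is essentially a bounded poll of the table status. A self-contained sketch of that pattern (a hypothetical helper for illustration, not the SDK's actual implementation; the {{Supplier}} stands in for a DescribeTable call):

```java
import java.util.function.Supplier;

/** Sketch of a bounded status poll: the pattern behind the SDK waiter
 *  that raised WaiterTimedOutException in the trace above. */
public class ActiveWaitSketch {

    /** Polls {@code status} up to {@code maxAttempts} times, sleeping
     *  {@code intervalMillis} between attempts.
     *  @return true once the status reads "ACTIVE", false on timeout. */
    static boolean waitForActive(Supplier<String> status,
                                 int maxAttempts,
                                 long intervalMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if ("ACTIVE".equals(status.get())) {
                return true;
            }
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;   // interrupted: give up immediately
            }
        }
        return false;   // the real waiter maps this to WaiterTimedOutException
    }
}
```

With a fixed attempt count, any capacity-mode change that keeps the table in UPDATING longer than maxAttempts * intervalMillis will surface as exactly this timeout.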
> Improve throttling on S3Guard DDB batch retries
> -----------------------------------------------
>
> Key: HADOOP-15834
> URL: https://issues.apache.org/jira/browse/HADOOP-15834
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Priority: Major
>
> the batch throttling may fail too fast.
> If there's a batch update of 25 writes but the default retry count is nine attempts, only nine writes of the batch may be attempted, even if each attempt is actually successfully writing data.
> In contrast, a single write of one piece of data gets the same number of attempts, so 25 individual writes can survive far more throttling than one bulk write.
> Proposed: make the retry logic more forgiving of batch writes, e.g. don't count a batch call in which at least one data item was written as a failure.
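The proposed policy could look like the following sketch (hypothetical helper, not the actual S3Guard code): the retry budget is only charged when a batch attempt makes no progress at all, mirroring how BatchWriteItem reports leftovers via its UnprocessedItems map.

```java
import java.util.List;
import java.util.function.Function;

/** Sketch: retry a DDB batch write, but only count an attempt
 *  against the retry budget when it writes zero items. */
public class BatchRetrySketch {

    /** @param writeBatch returns the items NOT written (the unprocessed
     *                    ones), as BatchWriteItem does via UnprocessedItems.
     *  @return number of attempts used, or -1 if the budget was exhausted. */
    static int writeWithProgressAwareRetry(
            List<String> items,
            Function<List<String>, List<String>> writeBatch,
            int maxNoProgressAttempts) {
        List<String> pending = items;
        int noProgress = 0;
        int attempts = 0;
        while (!pending.isEmpty()) {
            attempts++;
            List<String> unprocessed = writeBatch.apply(pending);
            if (unprocessed.size() < pending.size()) {
                noProgress = 0;          // progress made: reset the budget
            } else if (++noProgress >= maxNoProgressAttempts) {
                return -1;               // genuinely stuck: give up
            }
            pending = unprocessed;
        }
        return attempts;
    }
}
```

Under this policy a heavily throttled batch that manages only one item per attempt still completes the full 25 writes, instead of being failed after nine attempts the way a fixed per-call retry count would.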
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org