Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/07/24 22:18:00 UTC

[jira] [Commented] (HADOOP-15604) Test if the unprocessed items in S3Guard DDB metadata store caused by I/O thresholds

    [ https://issues.apache.org/jira/browse/HADOOP-15604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554858#comment-16554858 ] 

Steve Loughran commented on HADOOP-15604:
-----------------------------------------

Is this happening in the S3A committer?

> Test if the unprocessed items in S3Guard DDB metadata store caused by I/O thresholds
> ------------------------------------------------------------------------------------
>
>                 Key: HADOOP-15604
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15604
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Gabor Bota
>            Assignee: Gabor Bota
>            Priority: Major
>
> When ~50 files are being committed, each in its own thread from the commit pool, the DDB table is probably being overloaded just from a single process doing task commit. We should be backing off more, especially given that failing on a write could potentially leave the store inconsistent with the FS (renames, etc.).
> It would be nice to have some tests proving that the I/O thresholds are the reason for unprocessed items in the DynamoDB metadata store.
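
For context, the "unprocessed items" in the description above are what DynamoDB returns from a BatchWriteItem call when the table's provisioned write throughput is exceeded; the client is expected to resubmit them after backing off. The following is only a minimal sketch of that resubmit-with-backoff pattern against the AWS SDK for Java v1, not the actual S3Guard DynamoDBMetadataStore code; the class and method names are the SDK's own, while the retry limit and sleep values are purely illustrative.

import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class BatchWriteWithBackoff {

  /**
   * Write a batch, resubmitting any unprocessed items with exponential
   * backoff until everything is accepted or the retry budget is spent.
   * (Illustrative only; not the S3Guard implementation.)
   */
  static void writeWithBackoff(AmazonDynamoDB ddb,
      Map<String, List<WriteRequest>> items,
      int maxRetries) throws InterruptedException {
    Map<String, List<WriteRequest>> pending = items;
    long sleepMillis = 100;            // illustrative initial backoff
    int attempt = 0;
    while (!pending.isEmpty()) {
      BatchWriteItemResult result = ddb.batchWriteItem(
          new BatchWriteItemRequest().withRequestItems(pending));
      // Items rejected because of throughput limits come back here.
      pending = result.getUnprocessedItems();
      if (pending.isEmpty()) {
        return;                        // everything was accepted
      }
      if (++attempt > maxRetries) {
        // Giving up at this point is what can leave the metadata store
        // inconsistent with the FS, hence the argument for backing off more.
        throw new IllegalStateException(
            pending.size() + " table(s) still have unprocessed items");
      }
      Thread.sleep(sleepMillis);       // back off before resubmitting
      sleepMillis = Math.min(sleepMillis * 2, 5_000);
    }
  }
}

A test along the lines the description asks for could drop the table's provisioned write capacity, drive many parallel writes, and assert that unprocessed items appear and are eventually drained by the backoff loop.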



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
