Posted to issues@ozone.apache.org by "Mark Gui (Jira)" <ji...@apache.org> on 2021/07/08 07:43:00 UTC

[jira] [Updated] (HDDS-5413) Limit num of containers to process per round for ReplicationManager.

     [ https://issues.apache.org/jira/browse/HDDS-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Gui updated HDDS-5413:
---------------------------
    Description: 
Currently, ReplicationManager processes all containers at once; this can put a heavy load on datanodes if there are a lot of containers to be replicated/deleted/closed.

So it would be nice to have a bound for each round. HDFS has a similar setting, and this issue tries to implement something like HDFS's 'dfs.block.misreplication.processing.limit: 10000'.


This is just a limit on the number of containers to be processed. Note that ReplicationManager counts each container as processed no matter whether it is under-replicated, over-replicated, or healthy.
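
As a sketch of the idea (not the actual Ozone code; the class, method, and field names below are all illustrative), one round could cap its work like this, carrying the iteration position over to the next round:

    import java.util.Iterator;
    import java.util.List;

    // Illustrative sketch of a per-round bound for ReplicationManager.
    // All names here are hypothetical, not the real Ozone API.
    class BoundedReplicationRound {
      // Hypothetical analogue of dfs.block.misreplication.processing.limit,
      // scaled down for Ozone's much larger containers.
      private final int containerLimitPerRound = 250;

      private Iterator<Container> cursor; // position carried across rounds

      // One round: handle at most containerLimitPerRound containers.
      void processRound(List<Container> allContainers) {
        if (cursor == null || !cursor.hasNext()) {
          cursor = allContainers.iterator(); // wrap around to the start
        }
        int processed = 0;
        while (cursor.hasNext() && processed < containerLimitPerRound) {
          processContainer(cursor.next());
          processed++; // counted whether the container is under-replicated,
                       // over-replicated, or healthy
        }
      }

      void processContainer(Container c) {
        // replicate / delete / close as needed
      }

      static class Container { }
    }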

  was:
Currently, ReplicationManager processes all containers at once; this can put a heavy load on datanodes if there are a lot of containers to be replicated/deleted/closed.

So it would be nice to have a bound for each round. HDFS has a similar setting, and this issue tries to implement something like HDFS's 'dfs.block.misreplication.processing.limit: 10000'.

HDFS has a 128 MB block by default while Ozone has a 5 GB container by default, so scaling the HDFS limit by size:

128 MB * 10000 / (5 * 1024 MB) = 250 containers, which should be a reasonable default.


> Limit num of containers to process per round for ReplicationManager.
> --------------------------------------------------------------------
>
>                 Key: HDDS-5413
>                 URL: https://issues.apache.org/jira/browse/HDDS-5413
>             Project: Apache Ozone
>          Issue Type: Improvement
>            Reporter: Mark Gui
>            Assignee: Mark Gui
>            Priority: Major
>
> Currently, ReplicationManager processes all containers at once; this can put a heavy load on datanodes if there are a lot of containers to be replicated/deleted/closed.
> So it would be nice to have a bound for each round. HDFS has a similar setting, and this issue tries to implement something like HDFS's 'dfs.block.misreplication.processing.limit: 10000'.
>  
> This is just a limit on the number of containers to be processed. Note that ReplicationManager counts each container as processed no matter whether it is under-replicated, over-replicated, or healthy.
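
For reference, the HDFS limit cited above is a plain hdfs-site.xml property. An Ozone counterpart would presumably be exposed the same way; the Ozone key name and default below are hypothetical, shown only to illustrate the shape of the setting:

    <!-- hdfs-site.xml: the existing HDFS limit referenced above -->
    <property>
      <name>dfs.block.misreplication.processing.limit</name>
      <value>10000</value>
    </property>

    <!-- ozone-site.xml: hypothetical counterpart; the name and default are
         illustrative, with 250 following the 128 MB block vs 5 GB container
         scaling noted in the earlier description -->
    <property>
      <name>hdds.scm.replication.container.process.limit.per.round</name>
      <value>250</value>
    </property>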



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org