Posted to hdfs-dev@hadoop.apache.org by "Wei-Chiu Chuang (Jira)" <ji...@apache.org> on 2022/03/16 02:14:00 UTC

[jira] [Resolved] (HDFS-16502) Reconfigure Block Invalidate limit

     [ https://issues.apache.org/jira/browse/HDFS-16502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HDFS-16502.
------------------------------------
    Fix Version/s: 3.4.0
                   3.3.3
       Resolution: Fixed

> Reconfigure Block Invalidate limit
> ----------------------------------
>
>                 Key: HDFS-16502
>                 URL: https://issues.apache.org/jira/browse/HDFS-16502
>             Project: Hadoop HDFS
>          Issue Type: Task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.3
>
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> Based on the cluster load, it would be helpful to be able to tune the block invalidate limit (dfs.block.invalidate.limit). Today, the only way to change it without restarting the Namenode is indirectly, by reconfiguring the heartbeat interval, because the effective limit is computed as
> {code:java}
> Math.max(heartbeatInterval * 20, blockInvalidateLimit){code}
> This logic is not straightforward and operators are usually not aware of it (it is not documented); moreover, updating the heartbeat interval is not desirable in all cases.
> We should provide the ability to alter the block invalidate limit on a live cluster without affecting the heartbeat interval, so that load on Datanodes can be adjusted.
> We should also take this opportunity to move the (heartbeatInterval * 20) computation into a common method.
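For reference, here is a minimal, illustrative sketch of the coupling the description refers to. The class and method names are hypothetical and do not correspond to the actual Hadoop classes changed by HDFS-16502; only the Math.max(heartbeatInterval * 20, blockInvalidateLimit) expression reflects the behaviour quoted above.
{code:java}
// Illustrative sketch only: class and method names are hypothetical and do not
// correspond to the actual Hadoop classes touched by HDFS-16502.
public class BlockInvalidateLimitSketch {

    // The effective limit is the larger of 20 * heartbeat interval (in seconds)
    // and the configured dfs.block.invalidate.limit value.
    static int effectiveBlockInvalidateLimit(long heartbeatIntervalSeconds,
                                             int configuredInvalidateLimit) {
        return Math.max(20 * (int) heartbeatIntervalSeconds, configuredInvalidateLimit);
    }

    public static void main(String[] args) {
        // With the default 3-second heartbeat the floor is 60, so a configured
        // limit of 1000 wins and the heartbeat interval plays no role.
        System.out.println(effectiveBlockInvalidateLimit(3, 1000));   // prints 1000

        // With a 120-second heartbeat the floor (2400) overrides the configured
        // 1000, which is the indirect, non-obvious tuning path the issue describes.
        System.out.println(effectiveBlockInvalidateLimit(120, 1000)); // prints 2400
    }
}{code}
The change resolved here lets operators reconfigure the block invalidate limit directly at runtime instead of going through the heartbeat interval.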



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org