Posted to hdfs-dev@hadoop.apache.org by "Tsz Wo Nicholas Sze (JIRA)" <ji...@apache.org> on 2015/11/24 23:37:11 UTC
[jira] [Resolved] (HDFS-9434) Recommission a datanode with 500k blocks may pause NN for 30 seconds
[ https://issues.apache.org/jira/browse/HDFS-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tsz Wo Nicholas Sze resolved HDFS-9434.
---------------------------------------
Resolution: Fixed
Sangjin, thanks for the review.
I have committed the branch-2.6 patch.
> Recommission a datanode with 500k blocks may pause NN for 30 seconds
> --------------------------------------------------------------------
>
> Key: HDFS-9434
> URL: https://issues.apache.org/jira/browse/HDFS-9434
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.6.3
>
> Attachments: h9434_20151116.patch, h9434_20151116_branch-2.6.patch
>
>
> In BlockManager, processOverReplicatedBlocksOnReCommission is called while holding the namespace lock, and processOverReplicatedBlock prints a (not very useful) log message for each block. When a storage holds a large number of blocks, printing this message per block can prevent the NN from processing any other operations; we observed the NN pausing for 30 seconds for a storage with 500k blocks.
> I suggest changing the log message to trace level as a quick fix.
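The fix described above amounts to demoting a per-block message to trace level so the common case pays no logging cost while the namespace lock is held. A minimal, self-contained sketch of the pattern (using java.util.logging for illustration; Hadoop's actual logger, message text, and method names differ, and the names below are assumptions, not the committed patch):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class TraceLogSketch {
  private static final Logger LOG = Logger.getLogger("BlockManagerSketch");

  // Hypothetical stand-in for a per-block processing loop run under a lock.
  // Returns how many log messages were actually emitted.
  static int processBlocks(int blockCount) {
    int logged = 0;
    for (int i = 0; i < blockCount; i++) {
      // Guarded trace-level logging: the message string is only built
      // when trace output (FINEST here) is actually enabled, so with the
      // default INFO level the loop does no string formatting at all.
      if (LOG.isLoggable(Level.FINEST)) {
        LOG.finest("Processing over-replicated block " + i);
        logged++;
      }
    }
    return logged;
  }

  public static void main(String[] args) {
    LOG.setLevel(Level.INFO); // trace disabled: no per-block messages
    System.out.println(processBlocks(500_000)); // prints 0
  }
}
```

With trace disabled, even 500k iterations emit nothing and skip message construction entirely, which is why a log-level change alone can remove a multi-second pause under the lock.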
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)