Posted to common-dev@hadoop.apache.org by "Hairong Kuang (JIRA)" <ji...@apache.org> on 2007/03/14 20:31:09 UTC

[jira] Commented: (HADOOP-1117) DFS Scalability: When the namenode is restarted it consumes 80% CPU

    [ https://issues.apache.org/jira/browse/HADOOP-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12480884 ] 

Hairong Kuang commented on HADOOP-1117:
---------------------------------------

The patch looks good. It would be better to remove the logging, since neededReplication already logs this.

> DFS Scalability: When the namenode is restarted it consumes 80% CPU
> -------------------------------------------------------------------
>
>                 Key: HADOOP-1117
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1117
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.0
>            Reporter: dhruba borthakur
>         Assigned To: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.12.1
>
>         Attachments: CpuPendingTransfer2.patch
>
>
> When the namenode is restarted, the datanodes register and each block is inserted into neededReplication. When the namenode exits safemode, it starts processing neededReplication. It picks up a block from neededReplication, sees that it already has the required number of replicas, and continues to the next block in neededReplication. The blocks remain in neededReplication permanently, so the namenode worker thread scans this huge list of blocks once every 3 seconds. This consumes plenty of CPU on the namenode.
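
For illustration, here is a minimal Java sketch of the scan pattern described above. The names (Block, neededReplication, replicaCount, targetReplication, scanBuggy, scanFixed) are simplified stand-ins, not the actual FSNamesystem code in 0.12, and scanFixed only sketches one plausible direction for a fix, not necessarily what CpuPendingTransfer2.patch does.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    // Simplified stand-in for the namenode's replication bookkeeping;
    // not the actual Hadoop 0.12 FSNamesystem code.
    public class NeededReplicationScan {

        static class Block {
            final long id;
            Block(long id) { this.id = id; }
        }

        // On restart, every block reported by registering datanodes lands here.
        private final Set<Block> neededReplication = new HashSet<Block>();
        // Replica counts as reported by datanodes (hypothetical bookkeeping).
        private final Map<Block, Integer> replicaCount = new HashMap<Block, Integer>();
        private final int targetReplication = 3;

        // Behavior described in the issue: the worker thread runs this every
        // 3 seconds, skips blocks that already have enough replicas, but never
        // removes them, so the full list is rescanned forever.
        int scanBuggy() {
            int scheduled = 0;
            for (Block b : neededReplication) {
                Integer count = replicaCount.get(b);
                if (count != null && count >= targetReplication) {
                    continue; // fully replicated, but left in neededReplication
                }
                scheduled++; // a real namenode would schedule a transfer here
            }
            return scheduled;
        }

        // One plausible fix direction: drop fully replicated blocks so the
        // periodic scan shrinks to the blocks that actually need work.
        int scanFixed() {
            int scheduled = 0;
            Iterator<Block> it = neededReplication.iterator();
            while (it.hasNext()) {
                Block b = it.next();
                Integer count = replicaCount.get(b);
                if (count != null && count >= targetReplication) {
                    it.remove(); // remove instead of just skipping
                    continue;
                }
                scheduled++;
            }
            return scheduled;
        }
    }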

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.