Posted to common-issues@hadoop.apache.org by "Hudson (Commented) (JIRA)" <ji...@apache.org> on 2011/12/03 02:15:40 UTC

[jira] [Commented] (HADOOP-3500) decommission node is both in the "Live Datanodes" with "In Service" status, and in the "Dead Datanodes" of the dfs namenode web ui.

    [ https://issues.apache.org/jira/browse/HADOOP-3500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13161979#comment-13161979 ] 

Hudson commented on HADOOP-3500:
--------------------------------

Integrated in Hadoop-Common-0.23-Commit #249 (See [https://builds.apache.org/job/Hadoop-Common-0.23-Commit/249/])
    Merge -r 1209812:1209813 from trunk to branch. FIXES: HADOOP-3500

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1209815
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java

                
> decommission node is both in the "Live Datanodes" with "In Service" status, and in the "Dead Datanodes" of the dfs namenode web ui.
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3500
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3500
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.17.0
>         Environment: linux-2.6.9
>            Reporter: lixiangna
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> I tried to decommission a node with the following steps (a configuration sketch is shown after this report):
> (1) write the hostname of the node to be decommissioned into a file (the exclude file)
> (2) set the configuration parameter dfs.hosts.exclude to the absolute path of the exclude file
> (3) run "bin/hadoop dfsadmin -refreshNodes"
> Surprisingly, the node then appears both in the "Live Datanodes" list with "In Service" status and in the "Dead Datanodes" list of the dfs namenode web UI. When new data is copied to HDFS, the node's Used size keeps increasing just like the un-decommissioned nodes, so it is clearly still in service. Neither restarting HDFS nor waiting a long time (two days) completed the decommission.
> Even stranger: if nodes are configured as include nodes by similar steps, then those include nodes and
> the excluded node all appear only in the "Dead Datanodes" list.
> I ran the test many times on both 0.17.0 and 0.15.1, with the same result each time, so I think there may be a bug.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira