Posted to common-user@hadoop.apache.org by Xiangna Li <li...@gmail.com> on 2008/06/04 17:54:07 UTC

Confusion about decommissioning in HDFS

hi,

    I tried to decommission a node with the following steps:
      (1) write the hostname of the node to decommission into a file that serves as the exclude file.
      (2) point the configuration parameter dfs.hosts.exclude at that exclude file.
      (3) run "bin/hadoop dfsadmin -refreshNodes".
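
     For reference, here is roughly what I have (the hostname and the
file path below are placeholders for my actual values). The exclude
file contains one hostname per line:

      datanode3.example.com

     and the NameNode's hadoop-site.xml points dfs.hosts.exclude at it:

      <property>
        <name>dfs.hosts.exclude</name>
        <value>/path/to/hadoop/conf/exclude</value>
      </property>

     after which I run "bin/hadoop dfsadmin -refreshNodes" on the NameNode.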

     Surprisingly, the node shows up both in the "Live Datanodes" list
with "In Service" status and in the "Dead Datanodes" list of the DFS
NameNode web UI. To confirm whether it is still in service, I copied
several GB of files into HDFS, and its used size kept increasing like
the other nodes'. Does that mean the decommission feature doesn't
work?
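
     (Roughly how I checked; the file names here are just placeholders:)

      bin/hadoop dfs -put /local/path/bigfile /test/bigfile
      bin/hadoop dfsadmin -report

     and I watched the "DFS Used" figure for that node grow along with
the other datanodes, both in the report output and in the web UI.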

     Even stranger, I then put some nodes in the include file, added the
corresponding configuration, and ran "refreshNodes" again, but those
nodes and the excluded node all appear only in the "Dead Datanodes"
list. Is this a bug?
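
     In case it matters, this is how I pointed the NameNode at the
include file (I used the dfs.hosts parameter; the path below is a
placeholder for my actual one):

      <property>
        <name>dfs.hosts</name>
        <value>/path/to/hadoop/conf/include</value>
      </property>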


    Thanks in advance for your reply!