Posted to hdfs-dev@hadoop.apache.org by "Andy Isaacson (JIRA)" <ji...@apache.org> on 2012/09/13 19:38:07 UTC

[jira] [Created] (HDFS-3934) duplicative dfs_hosts entries handled wrong

Andy Isaacson created HDFS-3934:
-----------------------------------

             Summary: duplicative dfs_hosts entries handled wrong
                 Key: HDFS-3934
                 URL: https://issues.apache.org/jira/browse/HDFS-3934
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.0.1-alpha
            Reporter: Andy Isaacson
            Assignee: Andy Isaacson
            Priority: Minor


A dead DataNode listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by hostname ends up displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} after the NN restarts, because {{getDatanodeListForReport}} does not handle such a "pseudo-duplicate" correctly:
# after the restart, the "Remove any nodes we know about from the map" loop no longer has the knowledge needed to remove the spurious entry, and
# the "The remaining nodes are ones that are referenced by the hosts files" loop does not do hostname lookups, so it does not know that the IP and the hostname refer to the same host.
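The second point can be sketched as follows. This is a hypothetical helper, not the actual HDFS code: it resolves each hosts-file entry to an IP address before comparing, so an IP entry in dfs_hosts_allow and a hostname entry in dfs_hosts_exclude would be recognized as the same host. The class and method names here are illustrative only.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the missing normalization step: resolve each
// dfs_hosts entry to its IP address before comparing entries, so that
// "172.29.97.216:50010" and the corresponding hostname compare equal.
public class HostListNormalizer {
    static String normalize(String entry) {
        // Split off an optional ":port" suffix before resolving.
        String host = entry.split(":", 2)[0];
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return host; // fall back to the raw entry if resolution fails
        }
    }

    public static void main(String[] args) {
        // Raw string comparison treats these as different hosts,
        // which is the behavior the loops above exhibit.
        System.out.println("127.0.0.1".equals("localhost")); // prints "false"

        // After normalization, entries that resolve to the same address
        // collapse into one set element (assuming localhost resolves to
        // an IPv4 loopback address on this machine).
        Set<String> seen = new HashSet<>();
        seen.add(normalize("localhost:50010"));
        seen.add(normalize("localhost"));
        System.out.println(seen.size());
    }
}
```

With this kind of canonicalization, the "remaining nodes" loop would not emit a second entry for a host it had already accounted for under its other name.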

Relatedly, such an IP-based dfs_hosts entry causes a cosmetic problem in the JSP output: the *Node* column shows ":50010" as the node name, with HTML markup {{<a href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&amp;dir=%2F&amp;nnaddr=172.29.97.196:8020" title="172.29.97.216:50010">:50010</a>}}.
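The empty node name is consistent with the hostname field never being populated for the IP-only entry, so the "host:port" string degenerates to just ":port". A minimal sketch (the field names are illustrative, not the actual dfsnodelist.jsp code):

```java
// Sketch of how an unset hostname yields the ":50010" node name seen
// in the JSP output; "hostName" and "xferPort" are assumed names.
public class NodeNameDemo {
    public static void main(String[] args) {
        String hostName = ""; // never filled in for the IP-only entry
        int xferPort = 50010;
        String nodeName = hostName + ":" + xferPort;
        System.out.println(nodeName); // prints ":50010"
    }
}
```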

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira