Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2008/09/16 15:14:44 UTC

[jira] Commented: (HADOOP-3426) Datanode does not start up if the local machine's DNS isn't working right and dfs.datanode.dns.interface==default

    [ https://issues.apache.org/jira/browse/HADOOP-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12631387#action_12631387 ] 

Steve Loughran commented on HADOOP-3426:
----------------------------------------

I've now tracked down the root cause of this machine's DNS issues: /etc/hosts had the fully qualified name next to 127.0.0.1 rather than the shortname, and there was no DNS infrastructure, so nslookup of the local hostname was failing. Why this causes Java to fail to determine its hostname, I do not know. But it means I may be able to recreate the problem in a VMware image.
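
For reference, an /etc/hosts along these lines should reproduce it; "k2" is the shortname from the stack trace below, and the FQDN is a made-up placeholder:

    # broken: only the fully qualified name on the loopback line, shortname missing
    127.0.0.1   k2.example.internal localhost

    # working: shortname resolvable (Ubuntu convention puts the host's name on 127.0.1.1)
    127.0.0.1   localhost
    127.0.1.1   k2.example.internal k2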

> Datanode does not start up if the local machine's DNS isn't working right and dfs.datanode.dns.interface==default
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3426
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3426
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.0
>         Environment: Ubuntu 8.04, at home, no reverse DNS
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: hadoop-3426.patch, hadoop-3426.patch
>
>
> This is the third Java project I've been involved in that doesn't work on my home network, due to implementation issues with java.net.InetAddress.getLocalHost(), issues that only show up on an unmanaged network. Fortunately my home network exists to find these problems early.
> In Hadoop, if the local hostname doesn't resolve, the datanode does not start up:
> Caused by: java.net.UnknownHostException: k2: k2
> at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
> at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:184)
> at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:162)
> at org.apache.hadoop.dfs.ExtDataNode.<init>(ExtDataNode.java:55)
> at org.smartfrog.services.hadoop.components.datanode.DatanodeImpl.sfStart(DatanodeImpl.java:60)
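>
> A minimal repro, independent of Hadoop (the class name is mine, invented for illustration); run it on a box whose hostname doesn't resolve and you get the same exception:
>
> import java.net.InetAddress;
>
> // Prints the local hostname, or fails with UnknownHostException: k2: k2
> // when the name cannot be resolved via /etc/hosts or DNS.
> public class LocalHostCheck {
>     public static void main(String[] args) throws Exception {
>         System.out.println(InetAddress.getLocalHost().getCanonicalHostName());
>     }
> }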
> While requiring working DNS is a valid assumption in a production (non-virtual) cluster, you can't rely on it if you are playing with VMware/Xen private networks or on a home network.
> 1. In these situations, it's usually better to fall back to using "localhost" or 127.0.0.1 as the hostname if Java can't work it out for itself.
> 2. It's often good to cache the result if it is used in lots of parts of the system, otherwise the 30s resolution timeouts can cause problems of their own. A sketch of both points is below.
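>
> A minimal sketch of what I mean; this is not the attached patch, and the class and method names are invented for illustration:
>
> import java.net.InetAddress;
> import java.net.UnknownHostException;
>
> // Hypothetical helper: resolves the local hostname once, falls back to
> // "localhost" if resolution fails, and caches the answer so later callers
> // don't hit the resolver timeout again.
> public final class CachedLocalHost {
>     private static volatile String cached;
>
>     public static String get() {
>         String name = cached;
>         if (name == null) {
>             try {
>                 name = InetAddress.getLocalHost().getHostName();
>             } catch (UnknownHostException e) {
>                 name = "localhost";   // fallback from point 1
>             }
>             cached = name;            // caching from point 2
>         }
>         return name;
>     }
> }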
