Posted to common-dev@hadoop.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2007/07/20 01:05:06 UTC

[jira] Commented: (HADOOP-1638) Master node unable to bind to DNS hostname

    [ https://issues.apache.org/jira/browse/HADOOP-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12514038 ] 

Hadoop QA commented on HADOOP-1638:
-----------------------------------

+1

http://issues.apache.org/jira/secure/attachment/12362158/hadoop-1638.patch applied and successfully tested against trunk revision r557790.

Test results:   http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/435/testReport/
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/435/console

> Master node unable to bind to DNS hostname
> ------------------------------------------
>
>                 Key: HADOOP-1638
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1638
>             Project: Hadoop
>          Issue Type: Bug
>          Components: contrib/ec2
>    Affects Versions: 0.13.0, 0.13.1, 0.14.0, 0.15.0
>            Reporter: Stu Hood
>            Priority: Minor
>             Fix For: 0.13.1, 0.14.0, 0.15.0
>
>         Attachments: hadoop-1638.patch
>
>
> With a release package of Hadoop 0.13.0 or with latest SVN, the Hadoop contrib/ec2 scripts fail to start Hadoop correctly. After working around issues HADOOP-1634 and HADOOP-1635, and setting up a DynDNS address pointing to the master's IP, the ec2/bin/start-hadoop script completes.
> But the cluster is unusable because the namenode and jobtracker have not started successfully. Looking at the namenode log on the master reveals the following error:
> 2007-07-19 16:54:53,156 ERROR org.apache.hadoop.dfs.NameNode: java.net.BindException: Cannot assign requested address
>         at sun.nio.ch.Net.bind(Native Method)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:186)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:631)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:325)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:295)
>         at org.apache.hadoop.dfs.NameNode.init(NameNode.java:164)
>         at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:211)
>         at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:803)
>         at org.apache.hadoop.dfs.NameNode.main(NameNode.java:811)
> The master node refuses to bind to the DynDNS hostname set in the generated hadoop-site.xml. Here is the relevant part of the generated file:
> <property>
>   <name>fs.default.name</name>
>   <value>blah-ec2.gotdns.org:50001</value>
> </property>
>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>blah-ec2.gotdns.org:50002</value>
> </property>
> I'll attach a patch against hadoop-trunk that fixes the issue for me, but I'm not sure if this issue is something that someone can fix more thoroughly.
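
[Editorial note] The underlying failure in the stack trace above is general Java socket behavior: a ServerSocket can only bind to an address that is actually assigned to one of the machine's network interfaces. On EC2 the public IP is NATed, so a DynDNS name that resolves to the public address points at an IP the instance itself does not hold, and the bind fails. A minimal sketch of the two cases (198.51.100.1 is a TEST-NET documentation address, used here as a stand-in for such a non-local IP):

```java
import java.net.BindException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class BindDemo {
    public static void main(String[] args) throws Exception {
        // Binding to the wildcard address succeeds: the socket listens
        // on every local interface.
        try (ServerSocket ok = new ServerSocket(0, 50,
                InetAddress.getByName("0.0.0.0"))) {
            System.out.println("wildcard bind ok on port " + ok.getLocalPort());
        }

        // Binding to an address not assigned to any local interface
        // fails with the same exception seen in the namenode log.
        try (ServerSocket bad = new ServerSocket(0, 50,
                InetAddress.getByName("198.51.100.1"))) {
            System.out.println("unexpectedly bound: " + bad.getLocalSocketAddress());
        } catch (BindException e) {
            System.out.println("BindException: " + e.getMessage());
        }
    }
}
```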

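[Editorial note] One direction for the "more thorough" fix the reporter mentions would be to check, before binding, whether a configured hostname resolves to an address the machine actually holds, and fail (or fall back to the wildcard address) with a clearer message. An illustrative helper, not part of Hadoop's API:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.net.UnknownHostException;

public class LocalAddressCheck {
    /**
     * Returns true if the given host resolves to an address assigned to
     * one of this machine's network interfaces (or is the wildcard or
     * loopback address, which are always bindable).
     */
    static boolean isLocallyBindable(String host)
            throws UnknownHostException, SocketException {
        InetAddress target = InetAddress.getByName(host);
        if (target.isAnyLocalAddress() || target.isLoopbackAddress()) {
            return true;
        }
        // getByInetAddress returns null when no interface holds the address.
        return NetworkInterface.getByInetAddress(target) != null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isLocallyBindable("127.0.0.1"));    // true
        System.out.println(isLocallyBindable("198.51.100.1")); // false on any
                                                               // host without
                                                               // that address
    }
}
```
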
-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.