Posted to common-user@hadoop.apache.org by Ben Kucinich <be...@gmail.com> on 2008/02/08 17:28:47 UTC

URLs contain non-existent domain names in machines.jsp

I have Hadoop running on a master node, 192.168.1.8. fs.default.name
is 192.168.101.8:9000 and mapred.job.tracker is 192.168.101.8:9001.

I am accessing its web pages on port 50030 from another machine. I
visited http://192.168.101.8:50030/machines.jsp. It showed:

Name	Host	# running tasks	Failures	Seconds since heartbeat
tracker_hadoop.domain.example.com:/127.0.0.1:4545	hadoop.domain.example.com	0	0	9

Now, when I click on the
tracker_hadoop.domain.example.com:/127.0.0.1:4545 link it takes me to
http://hadoop.domain.example.com:50060/. But there is no DNS entry for
hadoop in our DNS server, so I get an error in the browser. "hadoop" is just
the locally set name on the master node. From my machine I can't
access the master node as "hadoop"; I have to access it by IP address,
192.168.101.8. So this link fails. Is there a way I can set it so
that it uses only IP addresses, not hostnames, when forming this link?

Re: URLs contain non-existent domain names in machines.jsp

Posted by Erik Hetzner <er...@ucop.edu>.
At Sun, 10 Feb 2008 16:25:37 +0000,
Tim Wintle <ti...@teamrubber.com> wrote:
> 
> I agree, this is a really annoying problem - most of the job appears to
> work, but unfortunately the reduce stage doesn't normally work.
> 
> Interestingly, when Hadoop runs on OS X it seems to set the hostname to
> the IP (or sets a hostname through Zeroconf). It would be useful if we
> could use just the IP address, though (especially for "dynamic" clusters
> where machines are being added / removed fairly often).

This patch worked for me some time ago when I was running on machines
with non-resolving hostnames. It compiles against Hadoop trunk but I
haven't tested it.

best,
Erik Hetzner


Re: URLs contain non-existent domain names in machines.jsp

Posted by Tim Wintle <ti...@teamrubber.com>.
I agree, this is a really annoying problem - most of the job appears to
work, but unfortunately the reduce stage doesn't normally work.

Interestingly, when Hadoop runs on OS X it seems to set the hostname to
the IP (or sets a hostname through Zeroconf). It would be useful if we
could use just the IP address, though (especially for "dynamic" clusters
where machines are being added / removed fairly often).
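
For what it's worth, here is a quick way to see which name Java itself
resolves on a given box - as far as I can tell Hadoop falls back to
InetAddress.getLocalHost() when the dns.interface / dns.nameserver
settings are left at "default", but treat this as an untested sketch:

import java.net.InetAddress;

// Prints the names Java resolves for this machine. If the canonical
// name below is a purely local one, the web UI links will use it too.
public class WhoAmI {
    public static void main(String[] args) throws Exception {
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("hostname:  " + local.getHostName());
        System.out.println("canonical: " + local.getCanonicalHostName());
        System.out.println("address:   " + local.getHostAddress());
    }
}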


On Sat, 2008-02-09 at 21:11 +0530, Ben Kucinich wrote:
> I made a small mistake describing my problem. There is no 192.168.1.8.
> There is only one machine, 192.168.101.8. I'll describe my problem
> again.
> 
> 1. I have set up a single-node cluster on 192.168.101.8. It is an Ubuntu server.
> 
> 2. There is no entry for 192.168.101.8 in the DNS server. However, the
> hostname is set to hadoop on this server, but this is only local.
> If I ping hadoop locally, it works, but if I ping hadoop or
> hadoop.domain.example.com from another system it doesn't work. From
> another system I have to ping 192.168.101.8. So I hope I have made it
> clear that hadoop.domain.example.com does not exist in our DNS server.
> 
> 3. domain.example.com is only a dummy example. Of course the actual
> name is the domain name of our organization.
> 
> 4. I started Hadoop on this server with the commands: bin/hadoop
> namenode -format; bin/start-all.sh
> 
> 5. jps showed all the processes started successfully.
> 
> 6. Here is my hadoop-site.xml
> 
> <configuration>
> 
> <property>
>   <name>fs.default.name</name>
>   <value>192.168.101.8:9000</value>
>   <description></description>
> </property>
> 
> <property>
>   <name>mapred.job.tracker</name>
>   <value>192.168.101.8:9001</value>
>   <description></description>
> </property>
> 
> <property>
>   <name>dfs.replication</name>
>   <value>1</value>
>   <description></description>
> </property>
> 
> </configuration>
> 
> 7. I am running a few of the ready-made examples in
> hadoop-0.15.3-examples.jar, especially the wordcount one. I am also
> putting some files into the DFS from remote systems, such as
> 192.168.101.100, 192.168.101.101, etc. But these remote systems are
> not slaves.
> 
> 8. From a remote system, I try to access:
> http://192.168.101.8:50030/machines.jsp
> 
> It showed:
> 
> Name  Host    # running tasks Failures        Seconds since heartbeat
> tracker_hadoop.domain.example.com:/127.0.0.1:4545   hadoop.domain.example.com   0   0   9
> 
> Now, when I click on the
> tracker_hadoop.domain.example.com:/127.0.0.1:4545 link it takes me to
> http://hadoop.domain.example.com:50060/. But it gives an error in the
> browser because of the reason mentioned in point 2. I don't want it to use
> the hostname to form those links; I want it to use the IP address,
> 192.168.101.8, to form the links. Is it possible?
> 
> On Feb 9, 2008 7:49 PM, Amar Kamat <am...@yahoo-inc.com> wrote:
> > Ben Kucinich wrote:
> > > I have Hadoop running on a master node, 192.168.1.8. fs.default.name
> > > is 192.168.101.8:9000 and mapred.job.tracker is 192.168.101.8:9001.
> > >
> > >
> > Actually the masters are the nodes where the JobTracker and the NameNode
> > are running, i.e. 192.168.101.8 in your case.
> > 192.168.1.8 would be your client node, the node from which the jobs are
> > submitted.
> > > I am accessing its web pages on port 50030 from another machine. I
> > > visited http://192.168.101.8:50030/machines.jsp. It showed:
> > >
> > > Name  Host    # running tasks Failures        Seconds since heartbeat
> > > tracker_hadoop.domain.example.com:/127.0.0.1:4545     hadoop.domain.example.com       0       0       9
> > >
> > The tracker name is tracker_<tracker-hostname:port>, where the hostname is
> > obtained from the DNS nameserver specified by
> > 'mapred.tasktracker.dns.nameserver' in conf/hadoop-default.xml. So I
> > guess in your case "hadoop.domain.example.com"
> > is the name obtained from the DNS nameserver for that node. Can you
> > provide more details on the XML parameters you have
> > changed in the conf directory? Also, can you provide more details on how you
> > are starting Hadoop?
> > Amar
> >
> > > Now, when I click on the
> > > tracker_hadoop.domain.example.com:/127.0.0.1:4545 link it takes me to
> > > http://hadoop.domain.example.com:50060/. But there is no DNS entry for
> > > hadoop in our DNS server, so I get an error in the browser. "hadoop" is just
> > > the locally set name on the master node. From my machine I can't
> > > access the master node as "hadoop"; I have to access it by IP address,
> > > 192.168.101.8. So this link fails. Is there a way I can set it so
> > > that it uses only IP addresses, not hostnames, when forming this link?
> > >
> >
> >


Re: URLs contain non-existent domain names in machines.jsp

Posted by Allen Wittenauer <aw...@yahoo-inc.com>.
On 2/9/08 7:41 AM, "Ben Kucinich" <be...@gmail.com> wrote:
> I don't want it to use
> the hostname to form those links; I want it to use the IP address,
> 192.168.101.8, to form the links. Is it possible?

    I'm fairly certain the answer is no.  You need to have working hostname
resolution.
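
    The usual workaround, rather than a fix, is to give each client a
hosts entry for the node. A sketch, using the name the web UI reports:

# appended to /etc/hosts on each machine that browses the web UI
192.168.101.8   hadoop.domain.example.com   hadoop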



Re: URLs contain non-existent domain names in machines.jsp

Posted by Ben Kucinich <be...@gmail.com>.
I made a small mistake describing my problem. There is no 192.168.1.8.
There is only one machine, 192.168.101.8. I'll describe my problem
again.

1. I have set up a single-node cluster on 192.168.101.8. It is an Ubuntu server.

2. There is no entry for 192.168.101.8 in the DNS server. However, the
hostname is set to hadoop on this server, but this is only local.
If I ping hadoop locally, it works, but if I ping hadoop or
hadoop.domain.example.com from another system it doesn't work. From
another system I have to ping 192.168.101.8. So I hope I have made it
clear that hadoop.domain.example.com does not exist in our DNS server
(see the /etc/hosts sketch at the end of this message).

3. domain.example.com is only a dummy example. Of course the actual
name is the domain name of our organization.

4. I started Hadoop on this server with the commands: bin/hadoop
namenode -format; bin/start-all.sh

5. jps showed all the processes started successfully.

6. Here is my hadoop-site.xml

<configuration>

<property>
  <name>fs.default.name</name>
  <value>192.168.101.8:9000</value>
  <description></description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>192.168.101.8:9001</value>
  <description></description>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description></description>
</property>

</configuration>

7. I am running a few of the ready-made examples in
hadoop-0.15.3-examples.jar, especially the wordcount one. I am also
putting some files into the DFS from remote systems, such as
192.168.101.100, 192.168.101.101, etc. But these remote systems are
not slaves.

8. From a remote system, I try to access:
http://192.168.101.8:50030/machines.jsp

It showed:

Name  Host    # running tasks Failures        Seconds since heartbeat
tracker_hadoop.domain.example.com:/127.0.0.1:4545   hadoop.domain.example.com   0   0   9

Now, when I click on the
tracker_hadoop.domain.example.com:/127.0.0.1:4545 link it takes me to
http://hadoop.domain.example.com:50060/. But it gives an error in the
browser because of the reason mentioned in point 2. I don't want it to use
the hostname to form those links; I want it to use the IP address,
192.168.101.8, to form the links. Is it possible?
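
In case it is relevant: I have not pasted the server's real /etc/hosts,
but I believe it follows the standard Ubuntu layout, roughly the sketch
below, which would also explain why the tracker name above contains
/127.0.0.1:

127.0.0.1       localhost
# Ubuntu maps the machine's own hostname to a loopback address by
# default, so "hadoop" (and the full name) resolve only on this box:
127.0.1.1       hadoop.domain.example.com       hadoop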

On Feb 9, 2008 7:49 PM, Amar Kamat <am...@yahoo-inc.com> wrote:
> Ben Kucinich wrote:
> > I have Hadoop running on a master node, 192.168.1.8. fs.default.name
> > is 192.168.101.8:9000 and mapred.job.tracker is 192.168.101.8:9001.
> >
> >
> Actually the masters are the nodes where the JobTracker and the NameNode
> are running, i.e. 192.168.101.8 in your case.
> 192.168.1.8 would be your client node, the node from which the jobs are
> submitted.
> > I am accessing its web pages on port 50030 from another machine. I
> > visited http://192.168.101.8:50030/machines.jsp. It showed:
> >
> > Name  Host    # running tasks Failures        Seconds since heartbeat
> > tracker_hadoop.domain.example.com:/127.0.0.1:4545     hadoop.domain.example.com       0       0       9
> >
> The tracker name is tracker_<tracker-hostname:port>, where the hostname is
> obtained from the DNS nameserver specified by
> 'mapred.tasktracker.dns.nameserver' in conf/hadoop-default.xml. So I
> guess in your case "hadoop.domain.example.com"
> is the name obtained from the DNS nameserver for that node. Can you
> provide more details on the XML parameters you have
> changed in the conf directory? Also, can you provide more details on how you
> are starting Hadoop?
> Amar
>
> > Now, when I click on the
> > tracker_hadoop.domain.example.com:/127.0.0.1:4545 link it takes me to
> > http://hadoop.domain.example.com:50060/. But there is no DNS entry for
> > hadoop in our DNS server, so I get an error in the browser. "hadoop" is just
> > the locally set name on the master node. From my machine I can't
> > access the master node as "hadoop"; I have to access it by IP address,
> > 192.168.101.8. So this link fails. Is there a way I can set it so
> > that it uses only IP addresses, not hostnames, when forming this link?
> >
>
>

Re: URLs contain non-existent domain names in machines.jsp

Posted by Amar Kamat <am...@yahoo-inc.com>.
Ben Kucinich wrote:
> I have Hadoop running on a master node, 192.168.1.8. fs.default.name
> is 192.168.101.8:9000 and mapred.job.tracker is 192.168.101.8:9001.
>
>   
Actually the masters are the nodes where the JobTracker and the NameNode
are running, i.e. 192.168.101.8 in your case.
192.168.1.8 would be your client node, the node from which the jobs are
submitted.
> I am accessing its web pages on port 50030 from another machine. I
> visited http://192.168.101.8:50030/machines.jsp. It showed:
>
> Name	Host	# running tasks	Failures	Seconds since heartbeat
> tracker_hadoop.domain.example.com:/127.0.0.1:4545	hadoop.domain.example.com	0	0	9
>   
The tracker name is tracker_<tracker-hostname:port>, where the hostname is
obtained from the DNS nameserver specified by
'mapred.tasktracker.dns.nameserver' in conf/hadoop-default.xml. So I
guess in your case "hadoop.domain.example.com"
is the name obtained from the DNS nameserver for that node. Can you
provide more details on the XML parameters you have
changed in the conf directory? Also, can you provide more details on how you
are starting Hadoop?
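
For reference, a sketch of the two relevant knobs (I believe
'mapred.tasktracker.dns.interface' sits next to the nameserver property
in hadoop-default.xml); note that neither forces raw IP addresses, they
only control which name gets picked:

<property>
  <name>mapred.tasktracker.dns.interface</name>
  <!-- interface whose address the TaskTracker reports, e.g. eth0;
       "default" means use the ordinary local hostname lookup -->
  <value>default</value>
</property>

<property>
  <name>mapred.tasktracker.dns.nameserver</name>
  <!-- DNS server the TaskTracker asks for its own host name;
       "default" means use the system resolver -->
  <value>default</value>
</property>

Copy them into hadoop-site.xml to override the defaults.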
Amar
> Now, when I click on the
> tracker_hadoop.domain.example.com:/127.0.0.1:4545 link it takes me to
> http://hadoop.domain.example.com:50060/. But there is no DNS entry for
> hadoop in our DNS server, so I get an error in the browser. "hadoop" is just
> the locally set name on the master node. From my machine I can't
> access the master node as "hadoop"; I have to access it by IP address,
> 192.168.101.8. So this link fails. Is there a way I can set it so
> that it uses only IP addresses, not hostnames, when forming this link?
>