Posted to common-user@hadoop.apache.org by Steve Sapovits <ss...@invitemedia.com> on 2008/02/27 14:15:03 UTC

Local testing and DHCP

When running in Pseudo-Distributed mode as outlined in the Quickstart, I see that
the DFS is, at some level, identified by the IP address it was created under. I'm
doing this on a laptop, and when I take it to another network, the daemons come
up okay but they can't find the DFS. It looks like it's because the IP is different
from when the DFS was first created. Is there a way around this so I can run on
the same box and see the same DFS regardless of what its IP is?

-- 
Steve Sapovits
Invite Media  -  http://www.invitemedia.com
ssapovits@invitemedia.com

Re: Local testing and DHCP

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
This should work for DFS. There might be some issues with running MR 
jobs... not very sure.

Raghu Angadi wrote:
> 
> It is doable. What was the exact config you used? What is the IP address 
> of the DataNodes that shows up on the namenode front page when it is running 
> fine?
> 
> I think the trick is to make all the servers bind to the localhost interface 
> (lo on Linux). E.g., all datanodes should have a 127.0.0.x address.
> 
> Raghu.
> 
> Steve Sapovits wrote:
>>
>> When running in Pseudo-Distributed mode as outlined in the Quickstart, I see that
>> the DFS is, at some level, identified by the IP address it was created under. I'm
>> doing this on a laptop, and when I take it to another network, the daemons come
>> up okay but they can't find the DFS. It looks like it's because the IP is different
>> from when the DFS was first created. Is there a way around this so I can run on
>> the same box and see the same DFS regardless of what its IP is?
>>
> 


Re: Local testing and DHCP

Posted by Steve Sapovits <ss...@invitemedia.com>.
Joydeep Sen Sarma wrote:

> a few of our nodes had (for inexplicable reasons) bound to localhost.localdomain for a while.
> definitely for map-reduce - this causes problems (not sure about hdfs). jobs were failing saying
> they could not find 'localhost.localdomain' (i think this was in the reduce copy phase trying to
> contact map outputs). i am not terribly sure of the details - but there are issues with this ..

I have a situation now, like I've seen before, where my config is exactly like it was yesterday,
but something about my network set-up is different and I can't get the pseudo-distributed 
copy to come up at all on the box. It looks like the name node is out there, but the URL
goes to the dfshealth.jsp page and that fails with a 404 error.

Very frustrating, as I'm somehow spending hours trying to get a local test setup to install
the same way it did the day before.

-- 
Steve Sapovits
Invite Media  -  http://www.invitemedia.com
ssapovits@invitemedia.com


RE: Local testing and DHCP

Posted by Joydeep Sen Sarma <js...@facebook.com>.
a few of our nodes had (for inexplicable reasons) bound to localhost.localdomain for a while. definitely for map-reduce - this causes problems (not sure about hdfs). jobs were failing saying they could not find 'localhost.localdomain' (i think this was in the reduce copy phase trying to contact map outputs). i am not terribly sure of the details - but there are issues with this ..
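
for a single box like your laptop, though, my guess (untested, and the hostname below is made up) is that you actually want the machine's own name on the loopback line of /etc/hosts, so it resolves the same way no matter what address DHCP hands out:

    # /etc/hosts -- 'mylaptop' is a made-up hostname; substitute your own.
    # keeping the box's name on 127.0.0.1 means name lookups don't depend
    # on the current DHCP lease
    127.0.0.1   localhost localhost.localdomain mylaptop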


-----Original Message-----
From: Raghu Angadi [mailto:rangadi@yahoo-inc.com]
Sent: Wed 2/27/2008 10:36 AM
To: core-user@hadoop.apache.org
Subject: Re: Local testing and DHCP
 

It is doable. What was the exact config you used? What is the IP address 
of the DataNodes that shows up on the namenode front page when it is running 
fine?

I think the trick is to make all the servers bind to the localhost interface 
(lo on Linux). E.g., all datanodes should have a 127.0.0.x address.

Raghu.

Steve Sapovits wrote:
> 
> When running in Pseudo-Distributed mode as outlined in the Quickstart, I see that
> the DFS is, at some level, identified by the IP address it was created under. I'm
> doing this on a laptop, and when I take it to another network, the daemons come
> up okay but they can't find the DFS. It looks like it's because the IP is different
> from when the DFS was first created. Is there a way around this so I can run on
> the same box and see the same DFS regardless of what its IP is?
> 



Re: Local testing and DHCP

Posted by Steve Sapovits <ss...@invitemedia.com>.
Raghu Angadi wrote:
> 
> It is doable. What was the exact config you used? What is the IP address 
> of the DataNodes that shows up on the namenode front page when it is running 
> fine?

When it works, the Name Node shows the current IP address of the
laptop. So, for example, if I get it set up and running at work, the name
node shows the work IP. Then I go home and can't get to the DFS,
but if I start over, format a new DFS, and run, then the name node shows
my home IP. In addition, I have an issue with DNS set-ups where
sometimes I can't get my fully qualified domain name via DNS (hostname -f on Linux).
When that happens, Hadoop fails to even install. So it appears to have
some dependency on the domain name.

In my hadoop-default.xml file, all the IPs I can find are set to zeroes.  Is
zero somehow telling it to use the real IP of the box?  If so, then it would
seem, as you say below, that setting those to 127.0.0.1 would do the
trick ... I can try that easily enough.  Let me know if that's what you were
thinking.  Thanks for the feedback.
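
For concreteness, here's what I'm planning to try in conf/hadoop-site.xml -- just a guess, with the property names taken from the zeroed entries I see in my hadoop-default.xml and the standard default ports. (My understanding is that 0.0.0.0 is the wildcard "listen on every interface" address, not "use the real IP".)

    <configuration>
      <!-- property names are my best guess; verify against hadoop-default.xml -->
      <!-- override the 0.0.0.0 wildcard binds so the daemons only
           listen on the loopback address -->
      <property>
        <name>dfs.datanode.address</name>
        <value>127.0.0.1:50010</value>
      </property>
      <property>
        <name>dfs.datanode.http.address</name>
        <value>127.0.0.1:50075</value>
      </property>
      <property>
        <name>dfs.http.address</name>
        <value>127.0.0.1:50070</value>
      </property>
    </configuration>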

> I think the trick is to make all the servers bind to the localhost interface 
> (lo on Linux). E.g., all datanodes should have a 127.0.0.x address.

-- 
Steve Sapovits
Invite Media  -  http://www.invitemedia.com
ssapovits@invitemedia.com


Re: Local testing and DHCP

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
It is doable. What was the exact config you used? What is the IP address 
of the DataNodes that shows up on the namenode front page when it is running 
fine?

I think the trick is to make all the servers bind to the localhost interface 
(lo on Linux). E.g., all datanodes should have a 127.0.0.x address.
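
Something like this in conf/hadoop-site.xml is what I have in mind -- an untested sketch using the usual default ports; check the property names against your hadoop-default.xml:

    <configuration>
      <!-- point clients and daemons at the loopback address, so the
           filesystem identity doesn't change with the DHCP lease -->
      <property>
        <name>fs.default.name</name>
        <value>127.0.0.1:9000</value>
      </property>
      <property>
        <name>mapred.job.tracker</name>
        <value>127.0.0.1:9001</value>
      </property>
    </configuration>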

Raghu.

Steve Sapovits wrote:
> 
> When running in Pseudo-Distributed mode as outlined in the Quickstart, I see that
> the DFS is, at some level, identified by the IP address it was created under. I'm
> doing this on a laptop, and when I take it to another network, the daemons come
> up okay but they can't find the DFS. It looks like it's because the IP is different
> from when the DFS was first created. Is there a way around this so I can run on
> the same box and see the same DFS regardless of what its IP is?
>