Posted to hdfs-user@hadoop.apache.org by "Hiller, Dean (Contractor)" <de...@broadridge.com> on 2011/01/04 02:03:00 UTC

Decommission of nodes not working (localhost vs. IPs in web UI too)

Luckily I am in dev so it's not a biggie, but the datanode seems to be
reading from /etc/hosts (i.e., Java calls to InetAddress.getLocalHost
return 127.0.0.1 instead of the IP) when displaying the names of the
live nodes. When displaying the names of the dead nodes, however, it
displays the hostnames from my slaves and exclude files.
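
For what it's worth, here is a minimal Java sketch of the lookup I
mean; on a box whose /etc/hosts maps the hostname onto the loopback
line, it prints 127.0.0.1:

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class LocalHostCheck {
        public static void main(String[] args) throws UnknownHostException {
            // Resolves this machine's hostname through the system resolver
            // (/etc/hosts first under a stock Linux nsswitch.conf, then DNS).
            InetAddress local = InetAddress.getLocalHost();
            System.out.println("hostname: " + local.getHostName());
            // Prints 127.0.0.1 when /etc/hosts maps the hostname to loopback.
            System.out.println("address:  " + local.getHostAddress());
        }
    }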


I wonder why the hadoop script doesn't pass the FQDN from the slaves
file to each slave node on startup; there would then be no /etc/hosts
lookup at all, and I guess the datanode could also bind to the correct
FQDN if it wanted to.


Anyway, my dead node shows up in my live nodes list (as localhost,
which it isn't, though with the correct IP) and never moves to a
decommissioned state. Is there any way to solve this?
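
For reference, a sketch of the decommission steps I'm using; as far as
I know this is the standard procedure for this Hadoop version, and the
file path below is just an example from my setup:

    <!-- hdfs-site.xml: tell the namenode where the exclude file lives -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/opt/hadoop/conf/excludes</value>
    </property>

After adding the datanode's hostname to that exclude file, I refresh
the namenode's node lists:

    $ hadoop dfsadmin -refreshNodes

The node is then supposed to show as decommissioning and eventually
decommissioned in the namenode web UI, but in my case it just stays in
the live list as localhost instead.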


I read that /etc/hosts is supposed to contain "127.0.0.1 localhost
localhost.localdomain", but to get the hostname to display correctly I
need something more like "127.0.0.1 <FQDN> <hostname>", since then I
know it would display properly there... and I may even have to change
the 127.0.0.1 to <ip>, because InetAddress.getLocalHost returns
whatever is in /etc/hosts on every Linux system I have ever used
(Ubuntu and CentOS at least).
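
Concretely, what seems to be needed is something along these lines
(the IP and names are made-up examples):

    # Keep loopback pointing only at localhost...
    127.0.0.1      localhost localhost.localdomain
    # ...and map the machine's real IP to its FQDN and short hostname,
    # so InetAddress.getLocalHost resolves to the routable address.
    192.168.1.10   node1.example.com node1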


Any way to fix this?


thanks,

Dean

