Posted to common-user@hadoop.apache.org by Bill Brune <bb...@decarta.com> on 2011/03/30 00:29:55 UTC

namenode won't start

Hi,

I've been running Hadoop 0.20.2 for a while now on two different clusters 
that I set up. Now on this new cluster I can't get the namenode to stay 
up.  It exits with an IOException, "incomplete hdfs uri", and prints the URI:
hdfs://rmsi_combined.rmsi.com:54310, which looks complete to me.

All the usual suspects are OK: DNS works both ways, passphraseless ssh 
works fine, and port 54310 is open, so I don't know what to tell the IT 
folks to fix.  In core-site.xml, fs.default.name is set to the IP address: 
hdfs://10.2.50.235:54310  (same problem if it is set to the hostname).

The only way to get it to work is to set fs.default.name to 
hdfs://localhost:54310, but that isn't going to let me run a multi-node 
cluster.

The IOException bubbles up from the uri.getHost() call in 
DistributedFileSystem.initialize(), as shown in the attached log snippet.

The exception is thrown if the host returned by getHost() is null.
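
A quick way to see the same thing outside Hadoop is to hand the URI to 
java.net.URI directly. This is only a rough sketch, not the actual 
DistributedFileSystem code (the class name UriHostCheck is just for 
illustration), but it reproduces the null host:

    import java.net.URI;

    public class UriHostCheck {
        public static void main(String[] args) throws Exception {
            // The exact value from fs.default.name. java.net.URI cannot
            // parse this authority as a server-based (host:port) authority,
            // so getHost() comes back null while getAuthority() still
            // holds the raw text.
            URI uri = new URI("hdfs://rmsi_combined.rmsi.com:54310");
            System.out.println("host      = " + uri.getHost());      // null
            System.out.println("authority = " + uri.getAuthority()); // rmsi_combined.rmsi.com:54310
        }
    }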

This has got to be some kind of permission problem somewhere.

Anyone have any ideas where to look?

Thanks!

-Bill



Re: namenode won't start

Posted by Harsh J <qw...@gmail.com>.
On Thu, Mar 31, 2011 at 12:59 AM, Bill Brune <bb...@decarta.com> wrote:
> Thanks for that tidbit; it appears to be the problem. Maybe that's a
> well-known issue? Or perhaps it should be added to the setup wiki?

It isn't really a Hadoop issue. See here for what defines a valid
hostname (the behavior of '_' is undefined and was not part of the
actual RFC spec): http://www.zytrax.com/books/dns/apa/names.html
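
If you want to catch this before a value ever reaches fs.default.name, a
rough pre-flight check along these lines works (my own sketch, nothing
that ships with Hadoop; it just encodes the letters-digits-hyphens label
rule described on that page):

    import java.util.regex.Pattern;

    public class HostnameCheck {
        // One DNS label: letters and digits, optional interior hyphens,
        // no leading or trailing hyphen, at most 63 characters.
        private static final Pattern LABEL =
            Pattern.compile("[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?");

        static boolean isValidHostname(String host) {
            for (String label : host.split("\\.", -1)) {
                if (label.isEmpty() || label.length() > 63
                        || !LABEL.matcher(label).matches()) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            System.out.println(isValidHostname("rmsi_combined.rmsi.com")); // false
            System.out.println(isValidHostname("rmsi-combined.rmsi.com")); // true
        }
    }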

-- 
Harsh J
http://harshj.com

Re: namenode won't start

Posted by Bill Brune <bb...@decarta.com>.
Thanks for that tidbit; it appears to be the problem. Maybe that's a 
well-known issue? Or perhaps it should be added to the setup wiki?

-Bill


On 03/29/2011 09:47 PM, Harsh J wrote:
> On Wed, Mar 30, 2011 at 3:59 AM, Bill Brune <bb...@decarta.com> wrote:
>> Hi,
>>
>> I've been running Hadoop 0.20.2 for a while now on two different clusters
>> that I set up. Now on this new cluster I can't get the namenode to stay up.
>> It exits with an IOException, "incomplete hdfs uri", and prints the URI:
>> hdfs://rmsi_combined.rmsi.com:54310, which looks complete to me.
> Underscores are generally not valid in hostnames.
>


Re: namenode won't start

Posted by Harsh J <qw...@gmail.com>.
On Wed, Mar 30, 2011 at 3:59 AM, Bill Brune <bb...@decarta.com> wrote:
> Hi,
>
> I've been running Hadoop 0.20.2 for a while now on two different clusters
> that I set up. Now on this new cluster I can't get the namenode to stay up.
> It exits with an IOException, "incomplete hdfs uri", and prints the URI:
> hdfs://rmsi_combined.rmsi.com:54310, which looks complete to me.

Underscores are generally not valid in hostnames.

-- 
Harsh J
http://harshj.com