Posted to common-user@hadoop.apache.org by Mark Kerzner <ma...@gmail.com> on 2011/03/02 23:57:18 UTC

Namenode trying to connect to localhost instead of the name and dying

Hi,

I am running Hadoop in pseudo-distributed mode on my laptop, following the
same configuration steps I used on my regular cluster, but I get this error:

2011-03-02 16:45:13,651 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred ip=/192.168.1.150 cmd=delete src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null perm=null
2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).

So it should be connecting to 192.168.1.150, but it is instead connecting to
127.0.1.1 - where does this IP come from?

Thank you,
Mark

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Mark Kerzner <ma...@gmail.com>.
Thank you, Eric, thank you, Bibek.

/etc/hosts was part of the problem, and then after some re-install commands
it just started working :)

Pleasure == working Hadoop cluster (even if it is pseudo-pleasure)

Sincerely,
Mark

On Wed, Mar 2, 2011 at 5:09 PM, Bibek Paudel <et...@gmail.com> wrote:

> On Thu, Mar 3, 2011 at 12:08 AM, Eric Sammer <es...@cloudera.com> wrote:
> > Check your /etc/hosts file and make sure the hostname of the machine is
> not
> > on the loopback device. This is almost always the cause of this.
> >
>
> +1
>
> -b
>
> > On Wed, Mar 2, 2011 at 5:57 PM, Mark Kerzner <ma...@gmail.com>
> wrote:
> >
> >> Hi,
> >>
> >> I am doing a pseudo-distributed mode on my laptop, following the same
> steps
> >> I used for all configurations on my regular cluster, but I get this
> error
> >>
> >> 2011-03-02 16:45:13,651 INFO
> >> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred
> ip=/
> >> 192.168.1.150 cmd=delete
> >>
> src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null
> >> perm=null
> >> 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying
> connect
> >> to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
> >>
> >> so it should be connecting to 192.168.1.150, and it is instead
> connecting
> >> to
> >> 127.0.1.1 - where does this ip come from?
> >>
> >> Thank you,
> >> Mark
> >>
> >
> >
> >
> > --
> > Eric Sammer
> > twitter: esammer
> > data: www.cloudera.com
> >
>

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Bibek Paudel <et...@gmail.com>.
On Thu, Mar 3, 2011 at 12:08 AM, Eric Sammer <es...@cloudera.com> wrote:
> Check your /etc/hosts file and make sure the hostname of the machine is not
> on the loopback device. This is almost always the cause of this.
>

+1

-b

> On Wed, Mar 2, 2011 at 5:57 PM, Mark Kerzner <ma...@gmail.com> wrote:
>
>> Hi,
>>
>> I am doing a pseudo-distributed mode on my laptop, following the same steps
>> I used for all configurations on my regular cluster, but I get this error
>>
>> 2011-03-02 16:45:13,651 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred ip=/
>> 192.168.1.150 cmd=delete
>> src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null
>> perm=null
>> 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
>>
>> so it should be connecting to 192.168.1.150, and it is instead connecting
>> to
>> 127.0.1.1 - where does this ip come from?
>>
>> Thank you,
>> Mark
>>
>
>
>
> --
> Eric Sammer
> twitter: esammer
> data: www.cloudera.com
>

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Eric Sammer <es...@cloudera.com>.
Check your /etc/hosts file and make sure the hostname of the machine is not
on the loopback device. This is almost always the cause of this.
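For readers hitting the same symptom: on Debian/Ubuntu the installer typically maps the machine's hostname to 127.0.1.1 in /etc/hosts, which is exactly the address in the log above. A minimal sketch of the check, using temp files with assumed contents (the hostnames "ubuntu" and "hadoop-sony" and the address 192.168.1.150 are taken from this thread; substitute your own):

```shell
# Simulate the /etc/hosts a Debian/Ubuntu install typically produces.
cat > /tmp/hosts.broken <<'EOF'
127.0.0.1   localhost
127.0.1.1   ubuntu
EOF

# If the machine's hostname sits on 127.0.1.1, Hadoop binds the loopback device.
awk '$1 == "127.0.1.1" {print "hostname", $2, "is on loopback"}' /tmp/hosts.broken

# The fix is to move the hostname onto the LAN address instead:
cat > /tmp/hosts.fixed <<'EOF'
127.0.0.1     localhost
192.168.1.150 ubuntu hadoop-sony
EOF
awk '$1 == "127.0.1.1"' /tmp/hosts.fixed | wc -l   # prints 0: nothing left on loopback
```

After editing the real /etc/hosts the same way, restart the daemons so the namenode rebinds to the LAN address.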

On Wed, Mar 2, 2011 at 5:57 PM, Mark Kerzner <ma...@gmail.com> wrote:

> Hi,
>
> I am doing a pseudo-distributed mode on my laptop, following the same steps
> I used for all configurations on my regular cluster, but I get this error
>
> 2011-03-02 16:45:13,651 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred ip=/
> 192.168.1.150 cmd=delete
> src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null
> perm=null
> 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
>
> so it should be connecting to 192.168.1.150, and it is instead connecting
> to
> 127.0.1.1 - where does this ip come from?
>
> Thank you,
> Mark
>



-- 
Eric Sammer
twitter: esammer
data: www.cloudera.com

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Bibek Paudel <et...@gmail.com>.
On Thu, Mar 3, 2011 at 12:01 AM, Mark Kerzner <ma...@gmail.com> wrote:
> It has just one entry
> hadoop-sony
> and
> ping hadoop-sony
> PING hadoop-sony (192.168.1.150) 56(84) bytes of data.
> 64 bytes from ubuntu (192.168.1.150): icmp_req=1 ttl=64 time=0.024 ms

In that case, I think you should check the configuration file where
you have defined the IPC address parameter; that is what the logs
are suggesting.

-b

> On Wed, Mar 2, 2011 at 4:59 PM, Bibek Paudel <et...@gmail.com> wrote:
>>
>> On Wed, Mar 2, 2011 at 11:57 PM, Mark Kerzner <ma...@gmail.com>
>> wrote:
>> > Hi,
>> >
>> > I am doing a pseudo-distributed mode on my laptop, following the same
>> > steps
>> > I used for all configurations on my regular cluster, but I get this
>> > error
>> >
>> > 2011-03-02 16:45:13,651 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred
>> > ip=/
>> > 192.168.1.150 cmd=delete
>> > src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info
>> > dst=null
>> > perm=null
>> > 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying
>> > connect
>> > to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
>> >
>> > so it should be connecting to 192.168.1.150, and it is instead
>> > connecting to
>> > 127.0.1.1 - where does this ip come from?
>>
>> My first reaction would be to check the conf/slaves file.
>>
>> -b
>>
>> >
>> > Thank you,
>> > Mark
>> >
>
>

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Bibek Paudel <et...@gmail.com>.
On Thu, Mar 3, 2011 at 12:07 AM, Mark Kerzner <ma...@gmail.com> wrote:
> all other daemons are alive, but namenode daemon dying
>

In particular, please check this setting: dfs.datanode.ipc.address

-b
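A hedged sketch of where such a setting lives (the file contents below are assumed for illustration, not taken from Mark's machine): the address clients dial comes from fs.default.name in core-site.xml, and since that value usually holds a hostname, it resolves through /etc/hosts - so a correct-looking value can still land on 127.0.1.1.

```shell
# Assumed core-site.xml fragment for a CDH-style hadoop-0.20 install.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ubuntu:8020</value>
  </property>
</configuration>
EOF

# Extract the namenode host:port clients will use, then resolve that name
# to see which interface the daemons actually connect to.
sed -n 's|.*<value>hdfs://\(.*\)</value>.*|\1|p' /tmp/core-site.xml   # prints ubuntu:8020
```

Port 8020 matching the "Retrying connect to server: ubuntu/127.0.1.1:8020" log line is what ties the error back to this setting.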

> On Wed, Mar 2, 2011 at 5:01 PM, Mark Kerzner <ma...@gmail.com> wrote:
>>
>> It has just one entry
>> hadoop-sony
>> and
>> ping hadoop-sony
>> PING hadoop-sony (192.168.1.150) 56(84) bytes of data.
>> 64 bytes from ubuntu (192.168.1.150): icmp_req=1 ttl=64 time=0.024 ms
>> On Wed, Mar 2, 2011 at 4:59 PM, Bibek Paudel <et...@gmail.com>
>> wrote:
>>>
>>> On Wed, Mar 2, 2011 at 11:57 PM, Mark Kerzner <ma...@gmail.com>
>>> wrote:
>>> > Hi,
>>> >
>>> > I am doing a pseudo-distributed mode on my laptop, following the same
>>> > steps
>>> > I used for all configurations on my regular cluster, but I get this
>>> > error
>>> >
>>> > 2011-03-02 16:45:13,651 INFO
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred
>>> > ip=/
>>> > 192.168.1.150 cmd=delete
>>> > src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info
>>> > dst=null
>>> > perm=null
>>> > 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying
>>> > connect
>>> > to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
>>> >
>>> > so it should be connecting to 192.168.1.150, and it is instead
>>> > connecting to
>>> > 127.0.1.1 - where does this ip come from?
>>>
>>> My first reaction would be to check the conf/slaves file.
>>>
>>> -b
>>>
>>> >
>>> > Thank you,
>>> > Mark
>>> >
>>
>
>

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Mark Kerzner <ma...@gmail.com>.
All other daemons are alive, but the namenode daemon is dying.

On Wed, Mar 2, 2011 at 5:01 PM, Mark Kerzner <ma...@gmail.com> wrote:

> It has just one entry
>
> hadoop-sony
>
> and
>
> ping hadoop-sony
> PING hadoop-sony (192.168.1.150) 56(84) bytes of data.
> 64 bytes from ubuntu (192.168.1.150): icmp_req=1 ttl=64 time=0.024 ms
>
> On Wed, Mar 2, 2011 at 4:59 PM, Bibek Paudel <et...@gmail.com> wrote:
>
>> On Wed, Mar 2, 2011 at 11:57 PM, Mark Kerzner <ma...@gmail.com>
>> wrote:
>> > Hi,
>> >
>> > I am doing a pseudo-distributed mode on my laptop, following the same
>> steps
>> > I used for all configurations on my regular cluster, but I get this
>> error
>> >
>> > 2011-03-02 16:45:13,651 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred
>> ip=/
>> > 192.168.1.150 cmd=delete
>> > src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null
>> > perm=null
>> > 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect
>> > to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
>> >
>> > so it should be connecting to 192.168.1.150, and it is instead
>> connecting to
>> > 127.0.1.1 - where does this ip come from?
>>
>> My first reaction would be to check the conf/slaves file.
>>
>> -b
>>
>> >
>> > Thank you,
>> > Mark
>> >
>>
>
>

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Mark Kerzner <ma...@gmail.com>.
It has just one entry

hadoop-sony

and

ping hadoop-sony
PING hadoop-sony (192.168.1.150) 56(84) bytes of data.
64 bytes from ubuntu (192.168.1.150): icmp_req=1 ttl=64 time=0.024 ms

On Wed, Mar 2, 2011 at 4:59 PM, Bibek Paudel <et...@gmail.com> wrote:

> On Wed, Mar 2, 2011 at 11:57 PM, Mark Kerzner <ma...@gmail.com>
> wrote:
> > Hi,
> >
> > I am doing a pseudo-distributed mode on my laptop, following the same
> steps
> > I used for all configurations on my regular cluster, but I get this error
> >
> > 2011-03-02 16:45:13,651 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred
> ip=/
> > 192.168.1.150 cmd=delete
> > src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null
> > perm=null
> > 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying
> connect
> > to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
> >
> > so it should be connecting to 192.168.1.150, and it is instead connecting
> to
> > 127.0.1.1 - where does this ip come from?
>
> My first reaction would be to check the conf/slaves file.
>
> -b
>
> >
> > Thank you,
> > Mark
> >
>

Re: Namenode trying to connect to localhost instead of the name and dying

Posted by Bibek Paudel <et...@gmail.com>.
On Wed, Mar 2, 2011 at 11:57 PM, Mark Kerzner <ma...@gmail.com> wrote:
> Hi,
>
> I am doing a pseudo-distributed mode on my laptop, following the same steps
> I used for all configurations on my regular cluster, but I get this error
>
> 2011-03-02 16:45:13,651 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=mapred ip=/
> 192.168.1.150 cmd=delete
> src=/var/lib/hadoop-0.20/cache/mapred/mapred/system/jobtracker.info dst=null
> perm=null
> 2011-03-02 16:45:14,524 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu/127.0.1.1:8020. Already tried 0 time(s).
>
> so it should be connecting to 192.168.1.150, and it is instead connecting to
> 127.0.1.1 - where does this ip come from?

My first reaction would be to check the conf/slaves file.

-b
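A minimal sketch of what that check looks like (paths and contents assumed; the thread never shows Mark's actual files): conf/slaves holds one hostname per line, and each entry is where the start scripts launch a datanode/tasktracker. For pseudo-distributed mode a single "localhost" entry is expected - the trouble starts when an entry resolves to the wrong interface via /etc/hosts.

```shell
# Assumed pseudo-distributed layout: a slaves file with one entry.
mkdir -p /tmp/hadoop-conf
printf 'localhost\n' > /tmp/hadoop-conf/slaves

# Pseudo-distributed mode should have exactly one worker entry.
wc -l < /tmp/hadoop-conf/slaves   # prints 1
```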

>
> Thank you,
> Mark
>