Posted to mapreduce-user@hadoop.apache.org by Keith Wiley <kw...@keithwiley.com> on 2013/02/19 00:00:26 UTC

Namenode formatting problem

This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:

2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
	at org.apache.hadoop.ipc.Server.bind(Server.java:356)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
************************************************************/

No Java processes start (although I wouldn't expect formatting the namenode to start any processes; only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:

ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

My /etc/hosts looks like this:
127.0.0.1   localhost localhost.localdomain CLIENT_HOST
MASTER_IP MASTER_HOST master
SLAVE_IP SLAVE_HOST slave01

This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password via authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the HDFS port (9000)).  Telnet doesn't behave well on EC2, which makes port testing a little difficult.
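Since telnet is unreliable there, a specific TCP port can be probed with tools that are usually already present — a sketch using bash's built-in /dev/tcp pseudo-device and the coreutils timeout command (MASTER_HOST and port 9000 below are the placeholders from this thread, not real hosts):

```shell
# Probe a TCP port without telnet: try to open a connection via bash's
# /dev/tcp pseudo-device, giving up after 3 seconds.
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed or unreachable"
  fi
}

check_port MASTER_HOST 9000   # placeholder host from this thread
check_port localhost 22       # e.g. sshd, if one is running locally
```

If netcat happens to be installed, "nc -zv HOST PORT" does the same job.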

Any ideas?

________________________________________________________________________________
Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com

"The easy confidence with which I know another man's religion is folly teaches
me to suspect that my own is also."
                                           --  Mark Twain
________________________________________________________________________________


RE: Namenode formatting problem

Posted by Marcin Mejran <ma...@hooklogic.com>.
The issue may be that the nodes are trying to use the EC2 public IP (which is meant for external access) to reach each other, which does not work (or at least doesn't work trivially). You need to use the private IPs, which are the ones ifconfig reports.

EC2 gives you static IPs as long as you don't restart or stop/start an instance.

That said, it gives you TWO IPs and you need to be careful about which one you use:
* Private IP: a local IP that cannot be accessed from outside EC2 but can be used for communication between instances. This is what ifconfig returns.
* Public IP: an IP that can be used for external access, but it is not shown in ifconfig.
You can imagine each instance as having its own personal NAT.

Instances should use the private IP when communicating with each other. I'm not sure whether the public IP cannot be used at all or is just a giant pain to set up correctly. I just did a test on my company's EC2 instances: I can ping the public IP from an instance, but I cannot use it for ssh. Not sure why offhand, although I believe any data sent to a public IP (even from within EC2) gets charged transfer fees, so it's not a good idea in any case.
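If in doubt about which address is which, the EC2 instance metadata service reports both — a sketch that only returns data when run from within an EC2 instance (elsewhere the calls time out and print nothing):

```shell
# Query the EC2 instance metadata service for this instance's addresses.
# 169.254.169.254 is a link-local endpoint reachable only inside EC2;
# --max-time keeps the call from hanging when run anywhere else.
meta() { curl -s --max-time 2 "http://169.254.169.254/latest/meta-data/$1" || true; }

echo "private IP: $(meta local-ipv4)"
echo "public IP:  $(meta public-ipv4)"
```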

-Marcin

-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com] 
Sent: Tuesday, February 19, 2013 11:25 AM
To: <us...@hadoop.apache.org>
Subject: Re: Namenode formatting problem

To simplify my previous post, the IPs for the master/slave/etc. in your /etc/hosts file should always match the ones reported by "ifconfig".
In proper deployments the IP is static. If the IP is dynamic, we'll need to think of some different approaches.

On Tue, Feb 19, 2013 at 9:53 PM, Harsh J <ha...@cloudera.com> wrote:
> Hey Keith,
>
> I'm guessing whatever "ip-13-0-177-110" is resolving to (ping to 
> check), is not what is your local IP on that machine (or rather, it 
> isn't the machine you intended to start it on)?
>
> Not sure if EC2 grants static IPs, but otherwise a change in the 
> assigned IP (checkable via ifconfig) would probably explain the 
> "Cannot assign" error received when we tried a bind() syscall.
>
> On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley <kw...@keithwiley.com> wrote:
>> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
>>
>> 2013-02-18 22:19:46,961 FATAL 
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in 
>> namenode join
>> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
>> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting 
>> with status 1
>> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1 
>> ************************************************************/
>>
>> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
>>
>> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on 
>> connection exception: java.net.ConnectException: Connection refused; 
>> For more details see:  
>> http://wiki.apache.org/hadoop/ConnectionRefused
>>
>> My /etc/hosts looks like this:
>> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
>> MASTER_IP MASTER_HOST master
>> SLAVE_IP SLAVE_HOST slave01
>>
>> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
>>
>> Any ideas?
>>
>> ________________________________________________________________________________
>> Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com
>>
>> "The easy confidence with which I know another man's religion is 
>> folly teaches me to suspect that my own is also."
>>                                            --  Mark Twain 
>> ________________________________________________________________________________
>>
>
>
>
> --
> Harsh J



--
Harsh J



Re: Namenode formatting problem

Posted by Harsh J <ha...@cloudera.com>.
To simplify my previous post, the IPs for the master/slave/etc. in your
/etc/hosts file should always match the ones reported by "ifconfig".
In proper deployments the IP is static. If the IP is dynamic, we'll
need to think of some different approaches.
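That consistency can be checked mechanically — a sketch assuming a Linux host with getent and the iproute2 "ip" tool available (the interface query may need adjusting for a given machine):

```shell
# Compare what the local hostname resolves to (via /etc/hosts or DNS)
# with the address actually configured on a global-scope interface.
resolved=$(getent hosts "$(hostname)" | awk '{print $1; exit}')
actual=$(ip -4 -o addr show scope global | awk '{split($4, a, "/"); print a[1]; exit}')

echo "hostname resolves to: ${resolved:-<nothing>}"
echo "interface address:    ${actual:-<none>}"
if [ "$resolved" != "$actual" ]; then
  echo "Mismatch: daemons may try to bind to an address this host does not have"
fi
```

When these two differ on the namenode host, a bind() to the resolved address fails with exactly the "Cannot assign requested address" seen in the log above.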

On Tue, Feb 19, 2013 at 9:53 PM, Harsh J <ha...@cloudera.com> wrote:
> Hey Keith,
>
> I'm guessing whatever "ip-13-0-177-110" is resolving to (ping to
> check), is not what is your local IP on that machine (or rather, it
> isn't the machine you intended to start it on)?
>
> Not sure if EC2 grants static IPs, but otherwise a change in the
> assigned IP (checkable via ifconfig) would probably explain the
> "Cannot assign" error received when we tried a bind() syscall.
>
> On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley <kw...@keithwiley.com> wrote:
>> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
>>
>> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
>> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
>> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
>> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
>> ************************************************************/
>>
>> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
>>
>> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>> My /etc/hosts looks like this:
>> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
>> MASTER_IP MASTER_HOST master
>> SLAVE_IP SLAVE_HOST slave01
>>
>> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
>>
>> Any ideas?
>>
>> ________________________________________________________________________________
>> Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com
>>
>> "The easy confidence with which I know another man's religion is folly teaches
>> me to suspect that my own is also."
>>                                            --  Mark Twain
>> ________________________________________________________________________________
>>
>
>
>
> --
> Harsh J



--
Harsh J

Re: Namenode formatting problem

Posted by Harsh J <ha...@cloudera.com>.
To simplify my previous post, your IPs for the master/slave/etc. in
/etc/hosts file should match the ones reported by "ifconfig" always.
In proper deployments, IP is static. If IP is dynamic, we'll need to
think of some different ways.

On Tue, Feb 19, 2013 at 9:53 PM, Harsh J <ha...@cloudera.com> wrote:
> Hey Keith,
>
> I'm guessing that whatever "ip-13-0-177-110" resolves to (ping it to
> check) is not the local IP of that machine (or rather, it isn't the
> machine you intended to start it on)?
>
> I'm not sure whether EC2 grants static IPs, but a change in the
> assigned IP (checkable via ifconfig) would probably explain the
> "Cannot assign" error received when we tried a bind() syscall.
>
> On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley <kw...@keithwiley.com> wrote:
>> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
>>
>> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
>> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
>> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
>> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
>> ************************************************************/
>>
>> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
>>
>> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>> My /etc/hosts looks like this:
>> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
>> MASTER_IP MASTER_HOST master
>> SLAVE_IP SLAVE_HOST slave01
>>
>> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
>>
>> Any ideas?
>>
>> ________________________________________________________________________________
>> Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com
>>
>> "The easy confidence with which I know another man's religion is folly teaches
>> me to suspect that my own is also."
>>                                            --  Mark Twain
>> ________________________________________________________________________________
>>
>
>
>
> --
> Harsh J



--
Harsh J

Re: Namenode formatting problem

Posted by Harsh J <ha...@cloudera.com>.
Hey Keith,

I'm guessing that whatever "ip-13-0-177-110" resolves to (ping it to
check) is not the local IP of that machine (or rather, it isn't the
machine you intended to start it on)?

I'm not sure whether EC2 grants static IPs, but a change in the
assigned IP (checkable via ifconfig) would probably explain the
"Cannot assign" error received when we tried a bind() syscall.

On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley <kw...@keithwiley.com> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
>
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
>
> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
>
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
>
> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
>
> Any ideas?
>
> ________________________________________________________________________________
> Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com
>
> "The easy confidence with which I know another man's religion is folly teaches
> me to suspect that my own is also."
>                                            --  Mark Twain
> ________________________________________________________________________________
>



--
Harsh J

RE: Namenode formatting problem

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Keith,

When you run the format command on the namenode machine it actually starts
the namenode, formats it, then shuts it down (see:
http://hadoop.apache.org/docs/stable/commands_manual.html).  Before you run
the format command, do you see any processes already listening on port 9212
(via netstat -anlp | grep 9212) on the namenode?

As per the recommendations at the link in the error message
(http://wiki.apache.org/hadoop/BindException), you could try changing the
port used by the namenode.

I'm not familiar with deploying Hadoop on EC2, so I'm not sure whether this
is different for EC2 deployments; however, the namenode usually listens on
port 8020 for file system metadata operations, so I guess you specified a
different port in the fs.default.name parameter in core-site.xml?
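[Editor's note: the NameNode address and port that clients and the NameNode agree on are set by fs.default.name (deprecated in Hadoop 2 in favor of fs.defaultFS), which lives in core-site.xml rather than hdfs-site.xml. A minimal sketch, where MASTER_HOST and port 9000 are the placeholders already used in this thread:

```xml
<!-- core-site.xml sketch; MASTER_HOST and 9000 are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://MASTER_HOST:9000</value>
  </property>
</configuration>
```
]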

Vijay

-----Original Message-----
From: Keith Wiley [mailto:kwiley@keithwiley.com] 
Sent: 19 February 2013 15:10
To: user@hadoop.apache.org
Subject: Re: Namenode formatting problem

Hmmm, okay.  Thanks.  Umm, is this a Yarn thing?  I also tried it with
Hadoop 2.0 MR1 (which I think should behave almost exactly like older
versions of Hadoop) and it had the exact same problem.  Does H2.0MR1 use
journal nodes?  I'll try to read up more on this later today.  Thanks for
the tip.

On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:

> Because journal nodes are also formatted during NN format, you need to
> start all JN daemons first.
> 
> On Feb 19, 2013 7:01 AM, "Keith Wiley" <kw...@keithwiley.com> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the
shell, but the log shows this:
> 
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
> 
> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
> 
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> 
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
> 
> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
> 
> Any ideas?

________________________________________________________________________________
Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com

"What I primarily learned in grad school is how much I *don't* know.
Consequently, I left grad school with a higher ignorance to knowledge ratio
than when I entered."
                                           --  Keith Wiley
________________________________________________________________________________



Re: Namenode formatting problem

Posted by Azuryy Yu <az...@gmail.com>.
I want to update my answer: if you didn't configure QJM HA in your
hadoop-2.0.3, then just ignore my reply.  Thanks.


On Tue, Feb 19, 2013 at 11:09 PM, Keith Wiley <kw...@keithwiley.com> wrote:

> Hmmm, okay.  Thanks.  Umm, is this a Yarn thing because I also tried it
> with Hadoop 2.0 MR1 (which I think should behave almost exactly like older
> versions of Hadoop) and it had the exact same problem.  Does H2.0MR1 us
> journal nodes?  I'll try to read up more on this later today.  Thanks for
> the tip.
>
> On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:
>
> > Because journal nodes are also be formated during NN format, so you need
> to start all JN daemons firstly.
> >
> > On Feb 19, 2013 7:01 AM, "Keith Wiley" <kw...@keithwiley.com> wrote:
> > This is Hadoop 2.0.  Formatting the namenode produces no errors in the
> shell, but the log shows this:
> >
> > 2013-02-18 22:19:46,961 FATAL
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> > java.net.BindException: Problem binding to [ip-13-0-177-110:9212]
> java.net.BindException: Cannot assign requested address; For more details
> see:  http://wiki.apache.org/hadoop/BindException
> >         at
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
> >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
> >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
> >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
> >         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
> >         at
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> > 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting
> with status 1
> > 2013-02-18 22:19:46,990 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> > ************************************************************/
> >
> > No java processes begin (although I wouldn't expect formatting the
> namenode to start any processes, only starting the namenode or datanode
> should do that), and "hadoop fs -ls /" gives me this:
> >
> > ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on
> connection exception: java.net.ConnectException: Connection refused; For
> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> >
> > My /etc/hosts looks like this:
> > 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> > MASTER_IP MASTER_HOST master
> > SLAVE_IP SLAVE_HOST slave01
> >
> > This is on EC2.  All of the nodes are in the same security group and the
> security group has full inbound access.  I can ssh between all three
> machines (client/master/slave) without a password ala authorized_keys.  I
> can ping the master node from the client machine (although I don't know how
> to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't
> behave on EC2 which makes port testing a little difficult.
> >
> > Any ideas?
>
>
> ________________________________________________________________________________
> Keith Wiley     kwiley@keithwiley.com     keithwiley.com
> music.keithwiley.com
>
> "What I primarily learned in grad school is how much I *don't* know.
> Consequently, I left grad school with a higher ignorance to knowledge
> ratio than
> when I entered."
>                                            --  Keith Wiley
>
> ________________________________________________________________________________
>
>

Re: Namenode formatting problem

Posted by Azuryy Yu <az...@gmail.com>.
I want to update my answer, if you don't configure QJM HA in your
hadoop-2.0.3, then just ignore my reply. Thanks.


On Tue, Feb 19, 2013 at 11:09 PM, Keith Wiley <kw...@keithwiley.com> wrote:

> Hmmm, okay.  Thanks.  Umm, is this a Yarn thing because I also tried it
> with Hadoop 2.0 MR1 (which I think should behave almost exactly like older
> versions of Hadoop) and it had the exact same problem.  Does H2.0MR1 us
> journal nodes?  I'll try to read up more on this later today.  Thanks for
> the tip.
>
> On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:
>
> > Because journal nodes are also be formated during NN format, so you need
> to start all JN daemons firstly.
> >
> > On Feb 19, 2013 7:01 AM, "Keith Wiley" <kw...@keithwiley.com> wrote:
> > This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
> >
> > 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> > java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
> >         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
> >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
> >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
> >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> > 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> > 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> > ************************************************************/
> >
> > No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
> >
> > ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> >
> > My /etc/hosts looks like this:
> > 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> > MASTER_IP MASTER_HOST master
> > SLAVE_IP SLAVE_HOST slave01
> >
> > This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
> >
> > Any ideas?
>
>
> ________________________________________________________________________________
> Keith Wiley     kwiley@keithwiley.com     keithwiley.com
> music.keithwiley.com
>
> "What I primarily learned in grad school is how much I *don't* know.
> Consequently, I left grad school with a higher ignorance to knowledge
> ratio than
> when I entered."
>                                            --  Keith Wiley
>
> ________________________________________________________________________________
>
>

RE: Namenode formatting problem

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Keith,

When you run the format command on the namenode machine it actually starts
the namenode, formats it, then shuts it down (see:
http://hadoop.apache.org/docs/stable/commands_manual.html). Before you run
the format command, do you see any processes already listening on port 9212
(via netstat -anlp | grep 9212) on the namenode?
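That check can be scripted as below. A sketch only: 9212 is the port from the BindException above (substitute your own), and `ss` is included as a fallback for hosts where net-tools is not installed.

```shell
# Is anything already bound to the namenode RPC port?
PORT=9212   # port from the BindException; substitute your own

if command -v netstat >/dev/null 2>&1; then
  # -a all sockets, -n numeric, -l listening, -p owning process
  netstat -anlp 2>/dev/null | grep ":${PORT} " || echo "nothing listening on ${PORT}"
else
  # iproute2 equivalent on distros without net-tools
  ss -lntp 2>/dev/null | grep ":${PORT} " || echo "nothing listening on ${PORT}"
fi
```

If a stale process holds the port, kill it (or pick another port) before re-running the format.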

As per the recommendations on the link in the error message
(http://wiki.apache.org/hadoop/BindException), you could try changing the
port used by the namenode.

I'm not familiar with deploying Hadoop on EC2 so I'm not sure if this is
different for EC2 deployments; however, the namenode usually listens on port
8020 for file system metadata operations, so I guess you specified a
different port via the fs.default.name parameter (in core-site.xml)?
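Since telnet misbehaves on the EC2 instances (per the quoted message below), nc can stand in for the port probe. A sketch, with MASTER_HOST and the port as placeholders for your actual namenode host and fs.default.name port:

```shell
# Probe a remote TCP port without telnet.
MASTER_HOST=master   # placeholder: your namenode's hostname or private IP
PORT=9000            # placeholder: the port from fs.default.name

if command -v nc >/dev/null 2>&1; then
  # -z probe without sending data, -v report the result, -w 3 three-second timeout
  nc -z -v -w 3 "$MASTER_HOST" "$PORT" \
    && echo "port open" \
    || echo "port closed, filtered, or host unreachable"
else
  echo "nc not installed; try: curl -v telnet://$MASTER_HOST:$PORT"
fi
```

An "open" result rules out security-group and firewall problems and points the blame back at the daemon not running or binding the wrong address.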

Vijay

-----Original Message-----
From: Keith Wiley [mailto:kwiley@keithwiley.com] 
Sent: 19 February 2013 15:10
To: user@hadoop.apache.org
Subject: Re: Namenode formatting problem

Hmmm, okay.  Thanks.  Umm, is this a Yarn thing because I also tried it with
Hadoop 2.0 MR1 (which I think should behave almost exactly like older
versions of Hadoop) and it had the exact same problem.  Does H2.0MR1 use
journal nodes?  I'll try to read up more on this later today.  Thanks for
the tip.

On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:

> Because journal nodes are also formatted during NN format, you need to start all JN daemons first.
> 
> On Feb 19, 2013 7:01 AM, "Keith Wiley" <kw...@keithwiley.com> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
> 
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
> 
> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
> 
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> 
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
> 
> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
> 
> Any ideas?

________________________________________________________________________________
Keith Wiley     kwiley@keithwiley.com     keithwiley.com
music.keithwiley.com

"What I primarily learned in grad school is how much I *don't* know.
Consequently, I left grad school with a higher ignorance to knowledge ratio
than when I entered."
                                           --  Keith Wiley
________________________________________________________________________________



Re: Namenode formatting problem

Posted by Keith Wiley <kw...@keithwiley.com>.
Hmmm, okay.  Thanks.  Umm, is this a Yarn thing because I also tried it with Hadoop 2.0 MR1 (which I think should behave almost exactly like older versions of Hadoop) and it had the exact same problem.  Does H2.0MR1 use journal nodes?  I'll try to read up more on this later today.  Thanks for the tip.

On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:

> Because journal nodes are also formatted during NN format, you need to start all JN daemons first.
> 
> On Feb 19, 2013 7:01 AM, "Keith Wiley" <kw...@keithwiley.com> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
> 
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
> 
> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
> 
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> 
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
> 
> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
> 
> Any ideas?
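One detail in the log above is worth checking directly: "Cannot assign requested address" usually means the process tried to bind a hostname that resolves to an IP no local interface owns (note the bind target is ip-13-0-177-110 while the shutdown line shows .../127.0.0.1). A quick consistency check, assuming a Linux host with iproute2; the comments describe the fix implied by the /etc/hosts layout quoted above:

```shell
# Compare what this host's name resolves to against the addresses
# actually assigned to its interfaces.
HN=$(hostname)
echo "this host is: $HN"
echo "it resolves to:"
getent hosts "$HN" || echo "  (no /etc/hosts or DNS entry for $HN)"
echo "addresses assigned to interfaces:"
ip -o addr show | awk '{print "  " $4}'
# If the resolved IP is missing from the interface list, map the master's
# hostname to the instance's private IP in /etc/hosts, not to 127.0.0.1.
```

On EC2 in particular, binding should use the instance's private IP (or 0.0.0.0), never the public IP, which is NATed and not assigned to any local interface.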

________________________________________________________________________________
Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com

"What I primarily learned in grad school is how much I *don't* know.
Consequently, I left grad school with a higher ignorance to knowledge ratio than
when I entered."
                                           --  Keith Wiley
________________________________________________________________________________


Re: Namenode formatting problem

Posted by Keith Wiley <kw...@keithwiley.com>.
Hmmm, okay.  Thanks.  Umm, is this a YARN thing?  I also tried it with Hadoop 2.0 MR1 (which I think should behave almost exactly like older versions of Hadoop) and it had the exact same problem.  Does H2.0MR1 use journal nodes?  I'll try to read up more on this later today.  Thanks for the tip.

On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:

> Because journal nodes are also formatted during NN format, you need to start all JN daemons first.
> 
> On Feb 19, 2013 7:01 AM, "Keith Wiley" <kw...@keithwiley.com> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
> 
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
> 
> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
> 
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> 
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
> 
> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
> 
> Any ideas?

________________________________________________________________________________
Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com

"What I primarily learned in grad school is how much I *don't* know.
Consequently, I left grad school with a higher ignorance to knowledge ratio than
when I entered."
                                           --  Keith Wiley
________________________________________________________________________________


Re: Namenode formatting problem

Posted by Azuryy Yu <az...@gmail.com>.
Because journal nodes are also formatted during NN format, you need to
start all JN daemons first.
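For a QJM (quorum-journal) HA setup, the usual ordering looks roughly like the sketch below. Paths assume a stock Hadoop 2.x tarball with HADOOP_PREFIX set; adjust for your install, and note this only applies if dfs.namenode.shared.edits.dir is configured with a qjournal:// URI.

```shell
# Sketch only: no-op unless a Hadoop 2.x install is present (HADOOP_PREFIX set).
if [ -n "${HADOOP_PREFIX:-}" ]; then
  # 1. On each host in dfs.namenode.shared.edits.dir, start the journal node:
  "$HADOOP_PREFIX/sbin/hadoop-daemon.sh" start journalnode

  # 2. Only once all journal nodes are up, format the namenode
  #    (the format step writes to the JNs, hence the ordering):
  "$HADOOP_PREFIX/bin/hdfs" namenode -format

  # 3. Then start the namenode itself:
  "$HADOOP_PREFIX/sbin/hadoop-daemon.sh" start namenode
fi
```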
On Feb 19, 2013 7:01 AM, "Keith Wiley" <kw...@keithwiley.com> wrote:

> This is Hadoop 2.0.  Formatting the namenode produces no errors in the
> shell, but the log shows this:
>
> 2013-02-18 22:19:46,961 FATAL
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212]
> java.net.BindException: Cannot assign requested address; For more details
> see:  http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 1
> 2013-02-18 22:19:46,990 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
>
> No java processes begin (although I wouldn't expect formatting the
> namenode to start any processes, only starting the namenode or datanode
> should do that), and "hadoop fs -ls /" gives me this:
>
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on
> connection exception: java.net.ConnectException: Connection refused; For
> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
>
> This is on EC2.  All of the nodes are in the same security group and the
> security group has full inbound access.  I can ssh between all three
> machines (client/master/slave) without a password ala authorized_keys.  I
> can ping the master node from the client machine (although I don't know how
> to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't
> behave on EC2 which makes port testing a little difficult.
>
> Any ideas?
>
>
> ________________________________________________________________________________
> Keith Wiley     kwiley@keithwiley.com     keithwiley.com
> music.keithwiley.com
>
> "The easy confidence with which I know another man's religion is folly
> teaches
> me to suspect that my own is also."
>                                            --  Mark Twain
>
> ________________________________________________________________________________
>
>

Re: Namenode formatting problem

Posted by Harsh J <ha...@cloudera.com>.
Hey Keith,

I'm guessing that whatever "ip-13-0-177-110" resolves to (ping it to
check) is not the local IP on that machine (or rather, it isn't the
machine you intended to start it on)?

I'm not sure if EC2 grants static IPs, but otherwise a change in the
assigned IP (check with ifconfig) would probably explain the
"Cannot assign" error received when the bind() syscall was attempted.
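A quick way to compare what that hostname resolves to against the addresses actually present on the box is sketched below ("ip-13-0-177-110" is the hostname from the log; the NN can only bind to an address that appears on a local interface):

```shell
# What /etc/hosts or DNS says the namenode's hostname points at:
getent hosts ip-13-0-177-110 || echo "hostname does not resolve here"

# What this machine believes its own address is:
hostname -i || true

# Addresses you can actually bind to; the resolved IP must be one of these
# (falls back to ifconfig on systems without the ip tool):
ip addr show 2>/dev/null | grep 'inet ' || ifconfig | grep 'inet '
```

If the resolved address is not in the interface list (common on EC2, where DNS may return the public IP while interfaces carry only the private one), bind() fails with exactly "Cannot assign requested address".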

On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley <kw...@keithwiley.com> wrote:
> This is Hadoop 2.0.  Formatting the namenode produces no errors in the shell, but the log shows this:
>
> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
> ************************************************************/
>
> No java processes begin (although I wouldn't expect formatting the namenode to start any processes, only starting the namenode or datanode should do that), and "hadoop fs -ls /" gives me this:
>
> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
> My /etc/hosts looks like this:
> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
> MASTER_IP MASTER_HOST master
> SLAVE_IP SLAVE_HOST slave01
>
> This is on EC2.  All of the nodes are in the same security group and the security group has full inbound access.  I can ssh between all three machines (client/master/slave) without a password ala authorized_keys.  I can ping the master node from the client machine (although I don't know how to ping a specific port, such as the hdfs port (9000)).  Telnet doesn't behave on EC2 which makes port testing a little difficult.
>
> Any ideas?
>
> ________________________________________________________________________________
> Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com
>
> "The easy confidence with which I know another man's religion is folly teaches
> me to suspect that my own is also."
>                                            --  Mark Twain
> ________________________________________________________________________________
>



--
Harsh J

Re: Namenode formatting problem

Posted by Harsh J <ha...@cloudera.com>.
Hey Keith,

I'm guessing that whatever "ip-13-0-177-110" resolves to (ping it to
check) is not the local IP of that machine (or rather, it isn't the
machine you intended to start it on)?

I'm not sure whether EC2 grants static IPs, but otherwise a change in the
assigned IP (checkable via ifconfig) would probably explain the
"Cannot assign" error received when the bind() syscall was attempted.

On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley <kw...@keithwiley.com> wrote:



--
Harsh J