Posted to user@hadoop.apache.org by Bhushan Pathak <bh...@gmail.com> on 2017/04/27 09:12:26 UTC

Hadoop 2.7.3 cluster namenode not starting

Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml,
hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not
start. The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
************************************************************/



I have changed the port number multiple times; every time I get the same
error. How do I get past this?
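
For reference, this "Cannot assign requested address" form of BindException
generally means the address that master:51150 resolves to is not assigned to
any interface on the machine doing the bind, which is why changing the port
does not help. A minimal way to reproduce the bind outside Hadoop (a sketch,
assuming Python is available on the node):

$ python -c "import socket; s = socket.socket(); s.bind(('master', 51150))"

If that one-liner fails the same way, the problem is how 'master' resolves,
not Hadoop.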



Thanks
Bhushan Pathak

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Donald Nelson <do...@uniscon.de>.
Hello Everyone,

I am planning to upgrade our Hadoop from v1.0.4 to 2.7.3, together with
HBase from 0.94 to 1.3. Does anyone know of steps that can help me?

Thanks in advance,

Donald Nelson


On 05/18/2017 12:39 PM, Bhushan Pathak wrote:
> What configuration do you want me to check? Each of the three nodes
> can access each other via password-less SSH and can ping each other's IP.
>
> Thanks
> Bhushan Pathak
>
> Thanks
> Bhushan Pathak
>
> On Wed, May 17, 2017 at 10:11 PM, Sidharth Kumar
> <sidharthkumar2707@gmail.com> wrote:
>
>     Hi,
>
>     The error you mentioned below, 'Name or service not known', means the
>     servers are not able to communicate with each other. Check the network
>     configuration.
>
>     Sidharth
>     Mob: +91 8197555599
>     LinkedIn: www.linkedin.com/in/sidharthkumar2792
>
>     On 17-May-2017 12:13 PM, "Bhushan Pathak"
>     <bhushan.pathak02@gmail.com> wrote:
>
>         Apologies for the delayed reply, was away due to some personal
>         issues.
>
>         I tried the telnet command as well, but no luck. I get the
>         response 'Name or service not known'.
>
>         Thanks
>         Bhushan Pathak
>
>         Thanks
>         Bhushan Pathak
>
>         On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar
>         <sidharthkumar2707@gmail.com> wrote:
>
>             Can you check if the ports are open by running the telnet
>             command?
>             Run the command below from the source machine to the
>             destination machine and check if this helps:
>
>             $telnet <IP address> <port number>
>             Ex: $telnet 192.168.1.60 9000
>
>
>             Let's Hadooping....!
>
>             Bests
>             Sidharth
>             Mob: +91 8197555599
>             LinkedIn: www.linkedin.com/in/sidharthkumar2792
>
>             On 28-Apr-2017 10:32 AM, "Bhushan Pathak"
>             <bhushan.pathak02@gmail.com> wrote:
>
>                 Hello All,
>
>                 1. The slave & master can ping each other as well as
>                 use passwordless SSH
>                 2. The actual IP starts with 10.x.x.x; I have put a
>                 placeholder in the config file as I cannot share the
>                 actual IP
>                 3. The namenode is formatted. I executed the 'hdfs
>                 namenode -format' again just to rule out the possibility
>                 4. I did not configure anything in the master file. I
>                 don't think Hadoop 2.7.3 has a master file to be
>                 configured
>                 5. The netstat command [sudo netstat -tulpn | grep
>                 '51150'] does not give any output.
>
>                 Even if I change the port number to a different one,
>                 say 52220 or 50000, I still get the same error.
>
>                 Thanks
>                 Bhushan Pathak
>
>                 Thanks
>                 Bhushan Pathak
>
>                 On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao
>                 <charlie.chao@hotmail.com> wrote:
>
>                     Hi Mr. Bhushan,
>
>                     Have you tried to format namenode?
>                     Here's the command:
>                     hdfs namenode -format
>
>                     I've encountered this problem where the namenode
>                     could not be started. This command easily fixed my
>                     problem.
>
>                     Hope this can help you.
>
>                     Sincerely,
>                     Lei Cao
>
>
>                     On Apr 27, 2017, at 12:09, Brahma Reddy Battula
>                     <brahmareddy.battula@huawei.com> wrote:
>
>>                     Please check “hostname -i”.
>>
>>                     1) What’s configured in the “master” file? (You
>>                     shared only the slaves file.)
>>
>>                     2) Are you able to “ping master”?
>>
>>                     3) Can you configure it like this and check once?
>>
>>                                     1.1.1.1 master
>>
>>                     Regards
>>
>>                     Brahma Reddy Battula
>>
>>                     *From:* Bhushan Pathak
>>                     [mailto:bhushan.pathak02@gmail.com]
>>                     *Sent:* 27 April 2017 18:16
>>                     *To:* Brahma Reddy Battula
>>                     *Cc:* user@hadoop.apache.org
>>                     *Subject:* Re: Hadoop 2.7.3 cluster namenode not
>>                     starting
>>
>>                     Some additional info -
>>
>>                     OS: CentOS 7
>>
>>                     RAM: 8GB
>>
>>                     Thanks
>>
>>                     Bhushan Pathak
>>
>>
>>                     Thanks
>>
>>                     Bhushan Pathak
>>
>>                     On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak
>>                     <bhushan.pathak02@gmail.com> wrote:
>>
>>                         Yes, I'm running the command on the master node.
>>
>>                         Attached are the config files & the hosts
>>                         file. I have updated the IP address only as
>>                         per company policy, so that original IP
>>                         addresses are not shared.
>>
>>                         The same config files & hosts file exist on
>>                         all 3 nodes.
>>
>>                         Thanks
>>
>>                         Bhushan Pathak
>>
>>
>>                         Thanks
>>
>>                         Bhushan Pathak
>>
>>                         On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy
>>                         Battula <brahmareddy.battula@huawei.com> wrote:
>>
>>                             Are you sure that you are starting in
>>                             same machine (master)..?
>>
>>                             Please share “/etc/hosts” and
>>                             configuration files..
>>
>>                             Regards
>>
>>                             Brahma Reddy Battula
>>
>>                             *From:* Bhushan Pathak
>>                             [mailto:bhushan.pathak02@gmail.com]
>>                             *Sent:* 27 April 2017 17:18
>>                             *To:* user@hadoop.apache.org
>>                             *Subject:* Fwd: Hadoop 2.7.3 cluster
>>                             namenode not starting
>>
>>                             Hello
>>
>>                             I have a 3-node cluster where I have
>>                             installed hadoop 2.7.3. I have updated
>>                             core-site.xml, mapred-site.xml, slaves,
>>                             hdfs-site.xml, yarn-site.xml,
>>                             hadoop-env.sh files with basic settings
>>                             on all 3 nodes.
>>
>>                             When I execute start-dfs.sh on the master
>>                             node, the namenode does not start. The
>>                             logs contain the following error -
>>
>>                             2017-04-27 14:17:57,166 ERROR
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode:
>>                             Failed to start namenode.
>>
>>                             java.net.BindException: Problem binding
>>                             to [master:51150] java.net.BindException:
>>                             Cannot assign requested address; For more
>>                             details see:
>>                             http://wiki.apache.org/hadoop/BindException
>>
>>                                   at
>>                             sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>                             Method)
>>
>>                                   at
>>                             sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>
>>                                   at
>>                             sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>
>>                                   at
>>                             java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>>
>>                                   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
>>
>>                                   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>
>>                                   at
>>                             org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
>>
>>                                   at
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
>>
>>                                   at
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
>>
>>                                   at
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>>
>>                                   at
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
>>
>>                                   at
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
>>
>>                                   at
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
>>
>>                             Caused by: java.net.BindException: Cannot
>>                             assign requested address
>>
>>                                   at sun.nio.ch.Net.bind0(Native Method)
>>
>>                                   at sun.nio.ch.Net.bind(Net.java:433)
>>
>>                                   at sun.nio.ch.Net.bind(Net.java:425)
>>
>>                                   at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>>
>>                                   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>>
>>                                   at
>>                             org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>
>>                                   ... 13 more
>>
>>                             2017-04-27 14:17:57,171 INFO
>>                             org.apache.hadoop.util.ExitUtil: Exiting
>>                             with status 1
>>
>>                             2017-04-27 14:17:57,176 INFO
>>                             org.apache.hadoop.hdfs.server.namenode.NameNode:
>>                             SHUTDOWN_MSG:
>>
>>                             /************************************************************
>>
>>                             SHUTDOWN_MSG: Shutting down NameNode at
>>                             master/1.1.1.1
>>
>>                             ************************************************************/
>>
>>                             I have changed the port number multiple
>>                             times, every time I get the same error.
>>                             How do I get past this?
>>
>>                             Thanks
>>
>>                             Bhushan Pathak
>>
>
>
>
>


Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Bhushan Pathak <bh...@gmail.com>.
What configuration do you want me to check? Each of the three nodes can
access each other via password-less SSH and can ping each other's IP.

Thanks
Bhushan Pathak

On Wed, May 17, 2017 at 10:11 PM, Sidharth Kumar <
sidharthkumar2707@gmail.com> wrote:

> Hi,
>
> The error you mentioned below, 'Name or service not known', means the
> servers are not able to communicate with each other. Check the network
> configuration.
>
> Sidharth
> Mob: +91 8197555599
> LinkedIn: www.linkedin.com/in/sidharthkumar2792
>
> On 17-May-2017 12:13 PM, "Bhushan Pathak" <bh...@gmail.com>
> wrote:
>
> Apologies for the delayed reply, was away due to some personal issues.
>
> I tried the telnet command as well, but no luck. I get the response
> 'Name or service not known'.
>
> Thanks
> Bhushan Pathak
>
> Thanks
> Bhushan Pathak
>
> On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar <
> sidharthkumar2707@gmail.com> wrote:
>
>> Can you check if the ports are open by running the telnet command?
>> Run the command below from the source machine to the destination machine
>> and check if this helps:
>>
>> $telnet <IP address> <port number>
>> Ex: $telnet 192.168.1.60 9000
>>
>>
>> Let's Hadooping....!
>>
>> Bests
>> Sidharth
>> Mob: +91 8197555599
>> LinkedIn: www.linkedin.com/in/sidharthkumar2792
>>
>> On 28-Apr-2017 10:32 AM, "Bhushan Pathak" <bh...@gmail.com>
>> wrote:
>>
>>> Hello All,
>>>
>>> 1. The slave & master can ping each other as well as use passwordless SSH
>>> 2. The actual IP starts with 10.x.x.x; I have put a placeholder in the
>>> config file as I cannot share the actual IP
>>> 3. The namenode is formatted. I executed the 'hdfs namenode -format'
>>> again just to rule out the possibility
>>> 4. I did not configure anything in the master file. I don't think Hadoop
>>> 2.7.3 has a master file to be configured
>>> 5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not
>>> give any output.
>>>
>>> Even if I change the port number to a different one, say 52220 or 50000,
>>> I still get the same error.
>>>
>>> Thanks
>>> Bhushan Pathak
>>>
>>> Thanks
>>> Bhushan Pathak
>>>
>>> On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <ch...@hotmail.com>
>>> wrote:
>>>
>>>> Hi Mr. Bhushan,
>>>>
>>>> Have you tried to format namenode?
>>>> Here's the command:
>>>> hdfs namenode -format
>>>>
>>>> I've encountered this problem where the namenode could not be started.
>>>> This command easily fixed my problem.
>>>>
>>>> Hope this can help you.
>>>>
>>>> Sincerely,
>>>> Lei Cao
>>>>
>>>>
>>>> On Apr 27, 2017, at 12:09, Brahma Reddy Battula <
>>>> brahmareddy.battula@huawei.com> wrote:
>>>>
>>>> Please check “hostname -i”.
>>>>
>>>> 1) What’s configured in the “master” file? (You shared only the slaves
>>>> file.)
>>>>
>>>> 2) Are you able to “ping master”?
>>>>
>>>> 3) Can you configure it like this and check once?
>>>>
>>>>                 1.1.1.1 master
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Regards
>>>>
>>>> Brahma Reddy Battula
>>>>
>>>>
>>>>
>>>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>>>> *Sent:* 27 April 2017 18:16
>>>> *To:* Brahma Reddy Battula
>>>> *Cc:* user@hadoop.apache.org
>>>> *Subject:* Re: Hadoop 2.7.3 cluster namenode not starting
>>>>
>>>>
>>>>
>>>> Some additional info -
>>>>
>>>> OS: CentOS 7
>>>>
>>>> RAM: 8GB
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Bhushan Pathak
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Bhushan Pathak
>>>>
>>>>
>>>>
>>>> On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <
>>>> bhushan.pathak02@gmail.com> wrote:
>>>>
>>>> Yes, I'm running the command on the master node.
>>>>
>>>>
>>>>
>>>> Attached are the config files & the hosts file. I have updated the IP
>>>> address only as per company policy, so that original IP addresses are not
>>>> shared.
>>>>
>>>>
>>>>
>>>> The same config files & hosts file exist on all 3 nodes.
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Bhushan Pathak
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Bhushan Pathak
>>>>
>>>>
>>>>
>>>> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
>>>> brahmareddy.battula@huawei.com> wrote:
>>>>
>>>> Are you sure that you are starting in same machine (master)..?
>>>>
>>>>
>>>>
>>>> Please share “/etc/hosts” and configuration files..
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Regards
>>>>
>>>> Brahma Reddy Battula
>>>>
>>>>
>>>>
>>>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>>>> *Sent:* 27 April 2017 17:18
>>>> *To:* user@hadoop.apache.org
>>>> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>>>>
>>>>
>>>>
>>>> Hello
>>>>
>>>>
>>>>
>>>> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
>>>> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
>>>> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>>>>
>>>>
>>>>
>>>> When I execute start-dfs.sh on the master node, the namenode does not
>>>> start. The logs contain the following error -
>>>>
>>>> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
>>>> Failed to start namenode.
>>>>
>>>> java.net.BindException: Problem binding to [master:51150]
>>>> java.net.BindException: Cannot assign requested address; For more details
>>>> see:  http://wiki.apache.org/hadoop/BindException
>>>>
>>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>>> Method)
>>>>
>>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
>>>> ConstructorAccessorImpl.java:62)
>>>>
>>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
>>>> legatingConstructorAccessorImpl.java:45)
>>>>
>>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:4
>>>> 23)
>>>>
>>>>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java
>>>> :792)
>>>>
>>>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:7
>>>> 21)
>>>>
>>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574
>>>> )
>>>>
>>>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>>>>
>>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(Protob
>>>> ufRpcEngine.java:534)
>>>>
>>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRp
>>>> cEngine.java:509)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>>>
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<in
>>>> it>(NameNodeRpcServer.java:345)
>>>>
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcSer
>>>> ver(NameNode.java:674)
>>>>
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(N
>>>> ameNode.java:647)
>>>>
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>>>> ode.java:812)
>>>>
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>>>> ode.java:796)
>>>>
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNo
>>>> de(NameNode.java:1493)
>>>>
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNod
>>>> e.java:1559)
>>>>
>>>> Caused by: java.net.BindException: Cannot assign requested address
>>>>
>>>>         at sun.nio.ch.Net.bind0(Native Method)
>>>>
>>>>         at sun.nio.ch.Net.bind(Net.java:433)
>>>>
>>>>         at sun.nio.ch.Net.bind(Net.java:425)
>>>>
>>>>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelI
>>>> mpl.java:223)
>>>>
>>>>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java
>>>> :74)
>>>>
>>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>>>
>>>>         ... 13 more
>>>>
>>>> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
>>>> with status 1
>>>>
>>>> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
>>>> SHUTDOWN_MSG:
>>>>
>>>> /************************************************************
>>>>
>>>> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>>>>
>>>> ************************************************************/
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> I have changed the port number multiple times, every time I get the
>>>> same error. How do I get past this?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Bhushan Pathak
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>
>

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Sidharth Kumar <si...@gmail.com>.
Hi,

The error you mentioned below, 'Name or service not known', means the servers
are not able to communicate with each other. Check the network configuration.
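
More precisely, 'Name or service not known' is a name-resolution failure:
telnet could not translate the hostname into an IP address at all, so no
connection was ever attempted. A quick check (a sketch, assuming the hostname
used was "master"):

$ getent hosts master

If this prints nothing, the name is missing from /etc/hosts and DNS, which
would also explain the namenode bind failure.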

Sidharth
Mob: +91 8197555599
LinkedIn: www.linkedin.com/in/sidharthkumar2792

On 17-May-2017 12:13 PM, "Bhushan Pathak" <bh...@gmail.com>
wrote:

Apologies for the delayed reply, was away due to some personal issues.

I tried the telnet command as well, but no luck. I get the response
'Name or service not known'.

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar <si...@gmail.com>
wrote:

> Can you check if the ports are open by running the telnet command?
> Run the command below from the source machine to the destination machine
> and check if this helps:
>
> $telnet <IP address> <port number>
> Ex: $telnet 192.168.1.60 9000
>
>
> Let's Hadooping....!
>
> Bests
> Sidharth
> Mob: +91 8197555599
> LinkedIn: www.linkedin.com/in/sidharthkumar2792
>
> On 28-Apr-2017 10:32 AM, "Bhushan Pathak" <bh...@gmail.com>
> wrote:
>
>> Hello All,
>>
>> 1. The slave & master can ping each other as well as use passwordless SSH
>> 2. The actual IP starts with 10.x.x.x; I have put a placeholder in the
>> config file as I cannot share the actual IP
>> 3. The namenode is formatted. I executed the 'hdfs namenode -format'
>> again just to rule out the possibility
>> 4. I did not configure anything in the master file. I don't think Hadoop
>> 2.7.3 has a master file to be configured
>> 5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not
>> give any output.
>>
>> Even if I change the port number to a different one, say 52220 or 50000, I
>> still get the same error.
>>
>> Thanks
>> Bhushan Pathak
>>
>> Thanks
>> Bhushan Pathak
>>
>> On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <ch...@hotmail.com>
>> wrote:
>>
>>> Hi Mr. Bhushan,
>>>
>>> Have you tried to format namenode?
>>> Here's the command:
>>> hdfs namenode -format
>>>
>>> I've encountered this problem where the namenode could not be started.
>>> This command easily fixed my problem.
>>>
>>> Hope this can help you.
>>>
>>> Sincerely,
>>> Lei Cao
>>>
>>>
>>> On Apr 27, 2017, at 12:09, Brahma Reddy Battula <
>>> brahmareddy.battula@huawei.com> wrote:
>>>
>>> Please check “hostname -i”.
>>>
>>> 1) What’s configured in the “master” file? (You shared only the slaves
>>> file.)
>>>
>>> 2) Are you able to “ping master”?
>>>
>>> 3) Can you configure it like this and check once?
>>>
>>>                 1.1.1.1 master
>>>
>>>
>>>
>>>
>>>
>>> Regards
>>>
>>> Brahma Reddy Battula
>>>
>>>
>>>
>>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>>> *Sent:* 27 April 2017 18:16
>>> *To:* Brahma Reddy Battula
>>> *Cc:* user@hadoop.apache.org
>>> *Subject:* Re: Hadoop 2.7.3 cluster namenode not starting
>>>
>>>
>>>
>>> Some additional info -
>>>
>>> OS: CentOS 7
>>>
>>> RAM: 8GB
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>>
>>> On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <
>>> bhushan.pathak02@gmail.com> wrote:
>>>
>>> Yes, I'm running the command on the master node.
>>>
>>>
>>>
>>> Attached are the config files & the hosts file. I have updated the IP
>>> address only as per company policy, so that original IP addresses are not
>>> shared.
>>>
>>>
>>>
>>> The same config files & hosts file exist on all 3 nodes.
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>>
>>> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
>>> brahmareddy.battula@huawei.com> wrote:
>>>
>>> Are you sure that you are starting in same machine (master)..?
>>>
>>>
>>>
>>> Please share “/etc/hosts” and configuration files..
>>>
>>>
>>>
>>>
>>>
>>> Regards
>>>
>>> Brahma Reddy Battula
>>>
>>>
>>>
>>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>>> *Sent:* 27 April 2017 17:18
>>> *To:* user@hadoop.apache.org
>>> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>>>
>>>
>>>
>>> Hello
>>>
>>>
>>>
>>> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
>>> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
>>> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>>>
>>>
>>>
>>> When I execute start-dfs.sh on the master node, the namenode does not
>>> start. The logs contain the following error -
>>>
>>> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
>>> Failed to start namenode.
>>>
>>> java.net.BindException: Problem binding to [master:51150]
>>> java.net.BindException: Cannot assign requested address; For more details
>>> see:  http://wiki.apache.org/hadoop/BindException
>>>
>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>>
>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
>>> ConstructorAccessorImpl.java:62)
>>>
>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
>>> legatingConstructorAccessorImpl.java:45)
>>>
>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:4
>>> 23)
>>>
>>>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java
>>> :792)
>>>
>>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:7
>>> 21)
>>>
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>>
>>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>>>
>>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>>>
>>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>>>
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(Protob
>>> ufRpcEngine.java:534)
>>>
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRp
>>> cEngine.java:509)
>>>
>>>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<in
>>> it>(NameNodeRpcServer.java:345)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcSer
>>> ver(NameNode.java:674)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(N
>>> ameNode.java:647)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>>> ode.java:812)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>>> ode.java:796)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNo
>>> de(NameNode.java:1493)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNod
>>> e.java:1559)
>>>
>>> Caused by: java.net.BindException: Cannot assign requested address
>>>
>>>         at sun.nio.ch.Net.bind0(Native Method)
>>>
>>>         at sun.nio.ch.Net.bind(Net.java:433)
>>>
>>>         at sun.nio.ch.Net.bind(Net.java:425)
>>>
>>>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelI
>>> mpl.java:223)
>>>
>>>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java
>>> :74)
>>>
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>>
>>>         ... 13 more
>>>
>>> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
>>> with status 1
>>>
>>> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
>>> SHUTDOWN_MSG:
>>>
>>> /************************************************************
>>>
>>> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>>>
>>> ************************************************************/
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> I have changed the port number multiple times, every time I get the same
>>> error. How do I get past this?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Bhushan Pathak <bh...@gmail.com>.
Apologies for the delayed reply, was away due to some personal issues.

I tried the telnet command as well, but no luck. I get the response
'Name or service not known'.

Thanks
Bhushan Pathak

On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar <si...@gmail.com>
wrote:

> Can you check if the ports are open by running the telnet command?
> Run the command below from the source machine to the destination machine
> and check if this helps:
>
> $telnet <IP address> <port number>
> Ex: $telnet 192.168.1.60 9000
>
>
> Let's Hadooping....!
>
> Bests
> Sidharth
> Mob: +91 8197555599
> LinkedIn: www.linkedin.com/in/sidharthkumar2792
>
> On 28-Apr-2017 10:32 AM, "Bhushan Pathak" <bh...@gmail.com>
> wrote:
>
>> Hello All,
>>
>> 1. The slave & master can ping each other as well as use passwordless SSH
>> 2. The actual IP starts with 10.x.x.x; I have put a placeholder in the
>> config file as I cannot share the actual IP
>> 3. The namenode is formatted. I executed the 'hdfs namenode -format'
>> again just to rule out the possibility
>> 4. I did not configure anything in the master file. I don't think Hadoop
>> 2.7.3 has a master file to be configured
>> 5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not
>> give any output.
>>
>> Even if I change the port number to a different one, say 52220 or 50000, I
>> still get the same error.
>>
>> Thanks
>> Bhushan Pathak
>>
>> Thanks
>> Bhushan Pathak
>>
>> On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <ch...@hotmail.com>
>> wrote:
>>
>>> Hi Mr. Bhushan,
>>>
>>> Have you tried to format namenode?
>>> Here's the command:
>>> hdfs namenode -format
>>>
>>> I've encountered this problem where the namenode could not be started.
>>> This command easily fixed my problem.
>>>
>>> Hope this can help you.
>>>
>>> Sincerely,
>>> Lei Cao
>>>
>>>
>>> On Apr 27, 2017, at 12:09, Brahma Reddy Battula <
>>> brahmareddy.battula@huawei.com> wrote:
>>>
>>> Please check “hostname -i”.
>>>
>>> 1) What’s configured in the “master” file? (You shared only the slaves
>>> file.)
>>>
>>> 2) Are you able to “ping master”?
>>>
>>> 3) Can you configure it like this and check once?
>>>
>>>                 1.1.1.1 master
>>>
>>>
>>>
>>>
>>>
>>> Regards
>>>
>>> Brahma Reddy Battula
>>>
>>>
>>>
>>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>>> *Sent:* 27 April 2017 18:16
>>> *To:* Brahma Reddy Battula
>>> *Cc:* user@hadoop.apache.org
>>> *Subject:* Re: Hadoop 2.7.3 cluster namenode not starting
>>>
>>>
>>>
>>> Some additional info -
>>>
>>> OS: CentOS 7
>>>
>>> RAM: 8GB
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>>
>>> On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <
>>> bhushan.pathak02@gmail.com> wrote:
>>>
>>> Yes, I'm running the command on the master node.
>>>
>>>
>>>
>>> Attached are the config files & the hosts file. I have updated the IP
>>> address only as per company policy, so that original IP addresses are not
>>> shared.
>>>
>>>
>>>
>>> The same config files & hosts file exist on all 3 nodes.
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>>
>>> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
>>> brahmareddy.battula@huawei.com> wrote:
>>>
>>> Are you sure that you are starting in same machine (master)..?
>>>
>>>
>>>
>>> Please share “/etc/hosts” and configuration files..
>>>
>>>
>>>
>>>
>>>
>>> Regards
>>>
>>> Brahma Reddy Battula
>>>
>>>
>>>
>>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>>> *Sent:* 27 April 2017 17:18
>>> *To:* user@hadoop.apache.org
>>> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>>>
>>>
>>>
>>> Hello
>>>
>>>
>>>
>>> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
>>> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
>>> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>>>
>>>
>>>
>>> When I execute start-dfs.sh on the master node, the namenode does not
>>> start. The logs contain the following error -
>>>
>>> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
>>> Failed to start namenode.
>>>
>>> java.net.BindException: Problem binding to [master:51150]
>>> java.net.BindException: Cannot assign requested address; For more details
>>> see:  http://wiki.apache.org/hadoop/BindException
>>>
>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>>
>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
>>> ConstructorAccessorImpl.java:62)
>>>
>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
>>> legatingConstructorAccessorImpl.java:45)
>>>
>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:4
>>> 23)
>>>
>>>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java
>>> :792)
>>>
>>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:7
>>> 21)
>>>
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>>
>>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>>>
>>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>>>
>>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>>>
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(Protob
>>> ufRpcEngine.java:534)
>>>
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRp
>>> cEngine.java:509)
>>>
>>>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<in
>>> it>(NameNodeRpcServer.java:345)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcSer
>>> ver(NameNode.java:674)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(N
>>> ameNode.java:647)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>>> ode.java:812)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>>> ode.java:796)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNo
>>> de(NameNode.java:1493)
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNod
>>> e.java:1559)
>>>
>>> Caused by: java.net.BindException: Cannot assign requested address
>>>
>>>         at sun.nio.ch.Net.bind0(Native Method)
>>>
>>>         at sun.nio.ch.Net.bind(Net.java:433)
>>>
>>>         at sun.nio.ch.Net.bind(Net.java:425)
>>>
>>>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelI
>>> mpl.java:223)
>>>
>>>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java
>>> :74)
>>>
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>>
>>>         ... 13 more
>>>
>>> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
>>> with status 1
>>>
>>> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
>>> SHUTDOWN_MSG:
>>>
>>> /************************************************************
>>>
>>> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>>>
>>> ************************************************************/
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> I have changed the port number multiple times, every time I get the same
>>> error. How do I get past this?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Sidharth Kumar <si...@gmail.com>.
Can you check if the ports are open by running the telnet command?
Run the command below from the source machine to the destination machine and
check if this helps:

$telnet <IP address> <port number>
Ex: $telnet 192.168.1.60 9000
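
(If telnet is not installed, the same check can be done with bash's /dev/tcp
redirection and the ss tool from iproute; a sketch, assuming both are
available, as they usually are on CentOS 7:)

$ timeout 2 bash -c '</dev/tcp/192.168.1.60/9000' && echo open || echo closed
$ ss -ltn | grep ':9000'        # on the destination: is anything listening?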


Let's Hadooping....!

Bests
Sidharth
Mob: +91 8197555599
LinkedIn: www.linkedin.com/in/sidharthkumar2792

On 28-Apr-2017 10:32 AM, "Bhushan Pathak" <bh...@gmail.com>
wrote:

> Hello All,
>
> 1. The slave & master can ping each other as well as use passwordless SSH
> 2. The actual IP starts with 10.x.x.x; I have put a placeholder in the
> config file as I cannot share the actual IP
> 3. The namenode is formatted. I executed the 'hdfs namenode -format' again
> just to rule out the possibility
> 4. I did not configure anything in the master file. I don't think Hadoop
> 2.7.3 has a master file to be configured
> 5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not
> give any output.
>
> Even if I change the port number to a different one, say 52220 or 50000, I
> still get the same error.
>
> Thanks
> Bhushan Pathak
>
> Thanks
> Bhushan Pathak
>
> On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <ch...@hotmail.com> wrote:
>
>> Hi Mr. Bhushan,
>>
>> Have you tried to format namenode?
>> Here's the command:
>> hdfs namenode -format
>>
>> I've encountered this problem where the namenode could not be started.
>> This command easily fixed my problem.
>>
>> Hope this can help you.
>>
>> Sincerely,
>> Lei Cao
>>
>>
>> On Apr 27, 2017, at 12:09, Brahma Reddy Battula <
>> brahmareddy.battula@huawei.com> wrote:
>>
>> Please check “hostname -i”.
>>
>> 1) What’s configured in the “master” file? (You shared only the slaves
>> file.)
>>
>> 2) Are you able to “ping master”?
>>
>> 3) Can you configure it like this and check once?
>>
>>                 1.1.1.1 master
>>
>>
>>
>>
>>
>> Regards
>>
>> Brahma Reddy Battula
>>
>>
>>
>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>> *Sent:* 27 April 2017 18:16
>> *To:* Brahma Reddy Battula
>> *Cc:* user@hadoop.apache.org
>> *Subject:* Re: Hadoop 2.7.3 cluster namenode not starting
>>
>>
>>
>> Some additional info -
>>
>> OS: CentOS 7
>>
>> RAM: 8GB
>>
>>
>>
>> Thanks
>>
>> Bhushan Pathak
>>
>>
>> Thanks
>>
>> Bhushan Pathak
>>
>>
>>
>> On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <
>> bhushan.pathak02@gmail.com> wrote:
>>
>> Yes, I'm running the command on the master node.
>>
>>
>>
>> Attached are the config files & the hosts file. I have updated the IP
>> address only as per company policy, so that original IP addresses are not
>> shared.
>>
>>
>>
>> The same config files & hosts file exist on all 3 nodes.
>>
>>
>>
>> Thanks
>>
>> Bhushan Pathak
>>
>>
>> Thanks
>>
>> Bhushan Pathak
>>
>>
>>
>> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
>> brahmareddy.battula@huawei.com> wrote:
>>
>> Are you sure that you are starting in same machine (master)..?
>>
>>
>>
>> Please share “/etc/hosts” and configuration files..
>>
>>
>>
>>
>>
>> Regards
>>
>> Brahma Reddy Battula
>>
>>
>>
>> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
>> *Sent:* 27 April 2017 17:18
>> *To:* user@hadoop.apache.org
>> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>>
>>
>>
>> Hello
>>
>>
>>
>> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
>> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
>> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>>
>>
>>
>> When I execute start-dfs.sh on the master node, the namenode does not
>> start. The logs contain the following error -
>>
>> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
>> Failed to start namenode.
>>
>> java.net.BindException: Problem binding to [master:51150]
>> java.net.BindException: Cannot assign requested address; For more details
>> see:  http://wiki.apache.org/hadoop/BindException
>>
>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>>
>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
>> ConstructorAccessorImpl.java:62)
>>
>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
>> legatingConstructorAccessorImpl.java:45)
>>
>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:4
>> 23)
>>
>>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.
>> java:792)
>>
>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:
>> 721)
>>
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>
>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>>
>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>>
>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>>
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(
>> ProtobufRpcEngine.java:534)
>>
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRp
>> cEngine.java:509)
>>
>>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<in
>> it>(NameNodeRpcServer.java:345)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcSer
>> ver(NameNode.java:674)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(N
>> ameNode.java:647)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>> ode.java:812)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN
>> ode.java:796)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNo
>> de(NameNode.java:1493)
>>
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNod
>> e.java:1559)
>>
>> Caused by: java.net.BindException: Cannot assign requested address
>>
>>         at sun.nio.ch.Net.bind0(Native Method)
>>
>>         at sun.nio.ch.Net.bind(Net.java:433)
>>
>>         at sun.nio.ch.Net.bind(Net.java:425)
>>
>>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelI
>> mpl.java:223)
>>
>>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.
>> java:74)
>>
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>
>>         ... 13 more
>>
>> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 1
>>
>> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
>> SHUTDOWN_MSG:
>>
>> /************************************************************
>>
>> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>>
>> ************************************************************/
>>
>>
>>
>>
>>
>>
>>
>> I have changed the port number multiple times, every time I get the same
>> error. How do I get past this?
>>
>>
>>
>>
>>
>>
>>
>> Thanks
>>
>> Bhushan Pathak
>>
>>
>>
>>
>>
>>
>>
>>
>

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Bhushan Pathak <bh...@gmail.com>.
Hello All,

1. The slave & master can ping each other as well as use passwordless SSH
2. The actual IP starts with 10.x.x.x; I have put a placeholder in the config
file as I cannot share the actual IP
3. The namenode is formatted. I executed the 'hdfs namenode -format' again
just to rule out the possibility
4. I did not configure anything in the master file. I don't think Hadoop
2.7.3 has a master file to be configured
5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not give
any output.

Even if I change the port number to a different one, say 52220 or 50000, I
still get the same error.
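
Since every port fails the same way, the bind address rather than the port
looks like the culprit. A quick comparison (a sketch, with the placeholder
standing in for the real 10.x.x.x address):

$ hostname -i                    # what this node's hostname resolves to
$ ip addr show | grep 'inet '    # addresses actually assigned to this node

If the resolved address does not appear in the interface list, the namenode
can never bind it, whatever the port.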

Thanks
Bhushan Pathak

On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <ch...@hotmail.com> wrote:

> Hi Mr. Bhushan,
>
> Have you tried to format namenode?
> Here's the command:
> hdfs namenode -format
>
> I've encountered this problem where the namenode could not be started.
> This command easily fixed my problem.
>
> Hope this can help you.
>
> Sincerely,
> Lei Cao
>
>
> On Apr 27, 2017, at 12:09, Brahma Reddy Battula <
> brahmareddy.battula@huawei.com> wrote:
>
> Please check “hostname -i”.
>
> 1) What’s configured in the “master” file? (You shared only the slaves
> file.)
>
> 2) Are you able to “ping master”?
>
> 3) Can you configure it like this and check once?
>
>                 1.1.1.1 master
>
>
>
>
>
> Regards
>
> Brahma Reddy Battula
>
>
>
> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
> *Sent:* 27 April 2017 18:16
> *To:* Brahma Reddy Battula
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Hadoop 2.7.3 cluster namenode not starting
>
>
>
> Some additional info -
>
> OS: CentOS 7
>
> RAM: 8GB
>
>
>
> Thanks
>
> Bhushan Pathak
>
>
> Thanks
>
> Bhushan Pathak
>
>
>
> On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <
> bhushan.pathak02@gmail.com> wrote:
>
> Yes, I'm running the command on the master node.
>
>
>
> Attached are the config files & the hosts file. I have updated the IP
> address only as per company policy, so that original IP addresses are not
> shared.
>
>
>
> The same config files & hosts file exist on all 3 nodes.
>
>
>
> Thanks
>
> Bhushan Pathak
>
>
> Thanks
>
> Bhushan Pathak
>
>
>
> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
> brahmareddy.battula@huawei.com> wrote:
>
> Are you sure that you are starting in same machine (master)..?
>
>
>
> Please share “/etc/hosts” and configuration files..
>
>
>
>
>
> Regards
>
> Brahma Reddy Battula
>
>
>
> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
> *Sent:* 27 April 2017 17:18
> *To:* user@hadoop.apache.org
> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>
>
>
> Hello
>
>
>
> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
> updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
> yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.
>
>
>
> When I execute start-dfs.sh on the master node, the namenode does not
> start. The logs contain the following error -
>
> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
> Failed to start namenode.
>
> java.net.BindException: Problem binding to [master:51150]
> java.net.BindException: Cannot assign requested address; For more details
> see:  http://wiki.apache.org/hadoop/BindException
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
>
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>
>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(
> NetUtils.java:792)
>
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>
>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<
> init>(ProtobufRpcEngine.java:534)
>
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(
> ProtobufRpcEngine.java:509)
>
>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<
> init>(NameNodeRpcServer.java:345)
>
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.
> createRpcServer(NameNode.java:674)
>
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(
> NameNode.java:647)
>
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(
> NameNode.java:812)
>
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(
> NameNode.java:796)
>
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.
> createNameNode(NameNode.java:1493)
>
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(
> NameNode.java:1559)
>
> Caused by: java.net.BindException: Cannot assign requested address
>
>         at sun.nio.ch.Net.bind0(Native Method)
>
>         at sun.nio.ch.Net.bind(Net.java:433)
>
>         at sun.nio.ch.Net.bind(Net.java:425)
>
>         at sun.nio.ch.ServerSocketChannelImpl.bind(
> ServerSocketChannelImpl.java:223)
>
>         at sun.nio.ch.ServerSocketAdaptor.bind(
> ServerSocketAdaptor.java:74)
>
>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>
>         ... 13 more
>
> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting
> with status 1
>
> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
> SHUTDOWN_MSG:
>
> /************************************************************
>
> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>
> ************************************************************/
>
>
>
>
>
>
>
> I have changed the port number multiple times, every time I get the same
> error. How do I get past this?
>
>
>
>
>
>
>
> Thanks
>
> Bhushan Pathak
>
>
>
>
>
>
>
>

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Lei Cao <ch...@hotmail.com>.
Hi Mr. Bhushan,

Have you tried to format namenode?
Here's the command:
hdfs namenode -format

I've encountered this problem where the namenode could not be started. This command easily fixed my problem.
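
(A caution on this: formatting wipes the HDFS metadata, so it is only safe on
a fresh cluster, and a BindException is raised before any metadata is read,
so it may not help here. The usual sequence, as a sketch:)

$ stop-dfs.sh             # stop any running HDFS daemons first
$ hdfs namenode -format
$ start-dfs.sh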

Hope this can help you.

Sincerely,
Lei Cao


On Apr 27, 2017, at 12:09, Brahma Reddy Battula <br...@huawei.com> wrote:

Please check “hostname -i”.

1) What’s configured in the “master” file? (You shared only the slaves file.)

2) Are you able to “ping master”?

3) Can you configure it like this and check once?

                1.1.1.1 master


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
Sent: 27 April 2017 18:16
To: Brahma Reddy Battula
Cc: user@hadoop.apache.org
Subject: Re: Hadoop 2.7.3 cluster namenode not starting

Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <bh...@gmail.com> wrote:
Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. I have updated the IP address only as per company policy, so that original IP addresses are not shared.

The same config files & hosts file exist on all 3 nodes.

Thanks
Bhushan Pathak

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <br...@huawei.com> wrote:
Are you sure that you are starting in same machine (master)..?

Please share “/etc/hosts” and configuration files..


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
Sent: 27 April 2017 17:18
To: user@hadoop.apache.org
Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting

Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml, hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not start. The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch<http://sun.nio.ch>.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch<http://sun.nio.ch>.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1<http://1.1.1.1>
************************************************************/



I have changed the port number multiple times, every time I get the same error. How do I get past this?



Thanks
Bhushan Pathak




RE: Hadoop 2.7.3 cluster namenode not starting

Posted by Brahma Reddy Battula <br...@huawei.com>.
Please check "hostname -i".

1) What is configured in the "master" file? (You shared only the slaves file.)

2) Are you able to "ping master"?

3) Can you configure it like this and check once?
                1.1.1.1 master
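
For reference, these checks can be run together from a shell on the master node (a sketch; "master" and 1.1.1.1 stand in for the real hostname and address):

    hostname -i               # the IP the local hostname resolves to
    ping -c 1 master          # "master" should resolve and respond
    grep master /etc/hosts    # the mapping the NameNode will use when binding

All three should agree on one address that is actually assigned to this machine.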


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
Sent: 27 April 2017 18:16
To: Brahma Reddy Battula
Cc: user@hadoop.apache.org
Subject: Re: Hadoop 2.7.3 cluster namenode not starting

Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <bh...@gmail.com> wrote:
[earlier messages in the thread trimmed; they repeat the exchange above]




Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Bhushan Pathak <bh...@gmail.com>.
Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <bh...@gmail.com> wrote:

> [quoted thread trimmed; it repeats the messages above]

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Hilmi Egemen Ciritoğlu <hi...@gmail.com>.
Can you check whether port 51150 is in use by another process:

sudo netstat -tulpn | grep '51150'
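
If netstat is not available (CentOS 7 minimal installs often omit the net-tools package), ss from iproute runs the same check:

    sudo ss -tulpn | grep ':51150'

If neither command shows a listener, the port itself is free, which points to the other cause of this BindException: the address being bound is not assigned to any local interface.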

Regards,
Egemen

2017-04-27 11:04 GMT+01:00 Bhushan Pathak <bh...@gmail.com>:

> [quoted thread trimmed; it repeats the messages above]

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Bhushan Pathak <bh...@gmail.com>.
Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. Per company policy I have
changed only the IP addresses, so that the original addresses are not
shared.

The same config files & hosts file exist on all 3 nodes.

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <brahmareddy.battula@huawei.com> wrote:

> [quoted message trimmed; it repeats Brahma Reddy Battula's reply above]

RE: Hadoop 2.7.3 cluster namenode not starting

Posted by Brahma Reddy Battula <br...@huawei.com>.
Are you sure that you are starting it on the same machine (master)?

Please share "/etc/hosts" and the configuration files.
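
For comparison, a working /etc/hosts for a small three-node cluster usually looks like the sketch below (hypothetical names and addresses, not the poster's real ones):

    127.0.0.1      localhost
    192.168.1.20   master
    192.168.1.21   slave1
    192.168.1.22   slave2

The key point is that "master" must map to an address actually assigned to the master's network interface, not to a placeholder or a loopback-only entry.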


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
Sent: 27 April 2017 17:18
To: user@hadoop.apache.org
Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting

[quoted original message trimmed; see the forwarded message below]


Fwd: Hadoop 2.7.3 cluster namenode not starting

Posted by Bhushan Pathak <bh...@gmail.com>.
Hello

I have a 3-node cluster where I have installed hadoop 2.7.3. I have updated
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml,
hadoop-env.sh files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not
start. The logs contain the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
************************************************************/



I have changed the port number multiple times, every time I get the same
error. How do I get past this?



Thanks
Bhushan Pathak

Re: Hadoop 2.7.3 cluster namenode not starting

Posted by Vinayakumar B <vi...@apache.org>.
I think you might need to change the IP itself.

Try something similar to 192.168.1.20
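
"Cannot assign requested address" means the address the NameNode tries to bind (whatever "master" resolves to) is not assigned to any local network interface. Two quick checks confirm this (a sketch; 192.168.1.20 above is only an example address):

    getent hosts master         # what "master" currently resolves to
    ip addr show | grep inet    # addresses actually assigned to local interfaces

If the resolved address does not appear in the interface list, fix /etc/hosts or the interface configuration so that they match.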

-Vinay

On 27 Apr 2017 8:20 pm, "Bhushan Pathak" <bh...@gmail.com> wrote:

> [quoted original message trimmed; see the forwarded message above]