Posted to hdfs-user@hadoop.apache.org by Rajesh Thallam <ra...@gmail.com> on 2015/02/10 19:12:14 UTC

Failed to start datanode due to bind exception

I have been repeatedly trying to start the DataNode, but it fails with a bind
exception saying the address is already in use, even though the port is free.

I used the command below to check:

netstat -a -t --numeric-ports -p | grep 500



I have overridden the default port 50070 with 50081, but the issue still
persists.

Starting DataNode with maxLockedMemory = 0
Opened streaming server at /172.19.7.160:50081
Balancing bandwith is 10485760 bytes/s
Number threads for balancing is 5
Waiting for threadgroup to exit, active threads is 0
Shutdown complete.
Exception in secureMain
java.net.BindException: bind(2) error: Address already in use when trying to bind to '/var/run/hdfs-sockets/datanode'
    at org.apache.hadoop.net.unix.DomainSocket.bind0(Native Method)
    at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:191)
    at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:907)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:873)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1066)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:411)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2297)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2184)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2231)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2407)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2431)
Exiting with status 1

hdfs-site.xml

  <property>
    <name>dfs.datanode.address</name>
    <value>hostname.dc.xx.org:50010</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>hostname.dc.xx.org:50020</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>hostname.dc.xx.org:50075</value>
  </property>

Regards,
RT

RE: Failed to start datanode due to bind exception

Posted by Brahma Reddy Battula <br...@huawei.com>.
Hello Rajesh


I think you might have configured "dfs.domain.socket.path" as /var/run/hdfs-sockets/datanode.

This is a path to a UNIX domain socket that will be used for communication between the DataNode and local HDFS clients. If the string "_PORT" is present in this path, it will be replaced by the TCP port of the DataNode.
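
For example, a property along these lines (the path here is only an illustration, not your actual value) gives each DataNode a distinct socket per port:

  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn._PORT</value>
  </property>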

Ideally, you would only get that error if something is already present at that socket path, so please re-check once.

In the worst case (if it is corrupted), delete "/var/run/hdfs-sockets/datanode" and start the DataNode again.
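
A minimal sketch of that, assuming the default path above and that no DataNode process is running at the time:

ls -l /var/run/hdfs-sockets/datanode   # check whether a stale socket file is present
rm /var/run/hdfs-sockets/datanode      # remove it (only while the DataNode is stopped)
# then start the DataNode again however you normally do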




Thanks & Regards

 Brahma Reddy Battula




________________________________
From: Rajesh Thallam [rajesh.thallam@gmail.com]
Sent: Wednesday, February 11, 2015 12:09 AM
To: user@hadoop.apache.org
Subject: Re: Failed to start datanode due to bind exception

There are no contents in the hdfs-sockets directory
Apache Hadoop base version is 2.5.0 (using CDH 5.3.0)

On Tue, Feb 10, 2015 at 10:24 AM, Ted Yu <yu...@gmail.com> wrote:
The exception came from DomainSocket so using netstat wouldn't reveal the conflict.

What's the output from:
ls -l /var/run/hdfs-sockets/datanode

Which Hadoop release are you using?

Cheers

On Tue, Feb 10, 2015 at 10:12 AM, Rajesh Thallam <ra...@gmail.com> wrote:

I have been repeatedly trying to start the DataNode, but it fails with a bind exception saying the address is already in use, even though the port is free.

I used the command below to check:

netstat -a -t --numeric-ports -p | grep 500



I have overridden the default port 50070 with 50081, but the issue still persists.

Starting DataNode with maxLockedMemory = 0
Opened streaming server at /172.19.7.160:50081
Balancing bandwith is 10485760 bytes/s
Number threads for balancing is 5
Waiting for threadgroup to exit, active threads is 0
Shutdown complete.
Exception in secureMain
java.net.BindException: bind(2) error: Address already in use when trying to bind to '/var/run/hdfs-sockets/datanode'
    at org.apache.hadoop.net.unix.DomainSocket.bind0(Native Method)
    at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:191)
    at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:907)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:873)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1066)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:411)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2297)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2184)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2231)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2407)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2431)
Exiting with status 1

hdfs-site.xml

  <property>
    <name>dfs.datanode.address</name>
    <value>hostname.dc.xx.org:50010</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>hostname.dc.xx.org:50020</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>hostname.dc.xx.org:50075</value>
  </property>

Regards,
RT




--
Cheers,
RT

Re: Failed to start datanode due to bind exception

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
/var/run/hdfs-sockets has to have the right permissions: by default, 755 and owned by hdfs:hdfs.
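
If the directory is missing or wrongly owned, a sketch of the fix (run as root; hdfs:hdfs is an assumption, as in a stock CDH install):

mkdir -p /var/run/hdfs-sockets          # create the socket directory if it is absent
chown hdfs:hdfs /var/run/hdfs-sockets   # hand it to the HDFS service user
chmod 755 /var/run/hdfs-sockets         # default permissions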

BR,
 Alexander 

> On 10 Feb 2015, at 19:39, Rajesh Thallam <ra...@gmail.com> wrote:
> 
> There are no contents in the hdfs-sockets directory 
> Apache Hadoop base version is 2.5.0 (using CDH 5.3.0)
> 
> On Tue, Feb 10, 2015 at 10:24 AM, Ted Yu <yuzhihong@gmail.com> wrote:
> The exception came from DomainSocket so using netstat wouldn't reveal the conflict.
> 
> What's the output from:
> ls -l /var/run/hdfs-sockets/datanode
> 
> Which Hadoop release are you using?
> 
> Cheers
> 
> On Tue, Feb 10, 2015 at 10:12 AM, Rajesh Thallam <rajesh.thallam@gmail.com> wrote:
> I have been repeatedly trying to start the DataNode, but it fails with a bind exception saying the address is already in use, even though the port is free.
> 
> I used the command below to check:
> 
> netstat -a -t --numeric-ports -p | grep 500
> 
>  
> I have overridden the default port 50070 with 50081, but the issue still persists.
> 
> Starting DataNode with maxLockedMemory = 0
> Opened streaming server at /172.19.7.160:50081
> Balancing bandwith is 10485760 bytes/s
> Number threads for balancing is 5
> Waiting for threadgroup to exit, active threads is 0
> Shutdown complete.
> Exception in secureMain
> java.net.BindException: bind(2) error: Address already in use when trying to bind to '/var/run/hdfs-sockets/datanode'
>     at org.apache.hadoop.net.unix.DomainSocket.bind0(Native Method)
>     at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:191)
>     at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:907)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:873)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1066)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:411)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2297)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2184)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2231)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2407)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2431)
> Exiting with status 1
> 
> hdfs-site.xml
>   <property>
>     <name>dfs.datanode.address</name>
>     <value>hostname.dc.xx.org:50010</value>
>   </property>
>   <property>
>     <name>dfs.datanode.ipc.address</name>
>     <value>hostname.dc.xx.org:50020</value>
>   </property>
>   <property>
>     <name>dfs.datanode.http.address</name>
>     <value>hostname.dc.xx.org:50075</value>
>   </property>
> Regards,
> RT
> 
> 
> 
> 
> -- 
> Cheers,
> RT


Re: Failed to start datanode due to bind exception

Posted by Rajesh Thallam <ra...@gmail.com>.
There are no contents in the hdfs-sockets directory
Apache Hadoop base version is 2.5.0 (using CDH 5.3.0)

On Tue, Feb 10, 2015 at 10:24 AM, Ted Yu <yu...@gmail.com> wrote:

> The exception came from DomainSocket so using netstat wouldn't reveal the
> conflict.
>
> What's the output from:
> ls -l /var/run/hdfs-sockets/datanode
>
> Which Hadoop release are you using?
>
> Cheers
>
> On Tue, Feb 10, 2015 at 10:12 AM, Rajesh Thallam <rajesh.thallam@gmail.com> wrote:
>
>> I have been repeatedly trying to start the DataNode, but it fails with a bind
>> exception saying the address is already in use, even though the port is free.
>>
>> I used the command below to check:
>>
>> netstat -a -t --numeric-ports -p | grep 500
>>
>>
>>
>> I have overridden the default port 50070 with 50081, but the issue still
>> persists.
>>
>> Starting DataNode with maxLockedMemory = 0
>> Opened streaming server at /172.19.7.160:50081
>> Balancing bandwith is 10485760 bytes/s
>> Number threads for balancing is 5
>> Waiting for threadgroup to exit, active threads is 0
>> Shutdown complete.
>> Exception in secureMain
>> java.net.BindException: bind(2) error: Address already in use when trying
>> to bind to '/var/run/hdfs-sockets/datanode'
>>     at org.apache.hadoop.net.unix.DomainSocket.bind0(Native Method)
>>     at
>> org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:191)
>>     at
>> org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:907)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:873)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1066)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:411)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2297)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2184)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2231)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2407)
>>     at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2431)
>> Exiting with status 1
>> hdfs-site.xml
>>
>>   <property>
>>     <name>dfs.datanode.address</name>
>>     <value>hostname.dc.xx.org:50010</value>
>>   </property>
>>   <property>
>>     <name>dfs.datanode.ipc.address</name>
>>     <value>hostname.dc.xx.org:50020</value>
>>   </property>
>>   <property>
>>     <name>dfs.datanode.http.address</name>
>>     <value>hostname.dc.xx.org:50075</value>
>>   </property>
>>
>> Regards,
>> RT
>>
>
>


-- 
Cheers,
RT

Re: Failed to start datanode due to bind exception

Posted by Ted Yu <yu...@gmail.com>.
The exception came from DomainSocket so using netstat wouldn't reveal the
conflict.
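
netstat -t only covers TCP; assuming ss is available, a UNIX-domain listing such as the following would show any live listener on that path (though note that a stale socket file with no listener can still block the bind):

ss -xl | grep hdfs-sockets   # listening UNIX domain sockets under /var/run/hdfs-sockets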

What's the output from:
ls -l /var/run/hdfs-sockets/datanode

Which Hadoop release are you using?

Cheers

On Tue, Feb 10, 2015 at 10:12 AM, Rajesh Thallam <ra...@gmail.com>
wrote:

> I have been repeatedly trying to start the DataNode, but it fails with a bind
> exception saying the address is already in use, even though the port is free.
>
> I used the command below to check:
>
> netstat -a -t --numeric-ports -p | grep 500
>
>
>
> I have overridden the default port 50070 with 50081, but the issue still
> persists.
>
> Starting DataNode with maxLockedMemory = 0
> Opened streaming server at /172.19.7.160:50081
> Balancing bandwith is 10485760 bytes/s
> Number threads for balancing is 5
> Waiting for threadgroup to exit, active threads is 0
> Shutdown complete.
> Exception in secureMain
> java.net.BindException: bind(2) error: Address already in use when trying
> to bind to '/var/run/hdfs-sockets/datanode'
>     at org.apache.hadoop.net.unix.DomainSocket.bind0(Native Method)
>     at
> org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:191)
>     at
> org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:907)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:873)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1066)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:411)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2297)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2184)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2231)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2407)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2431)
> Exiting with status 1
> hdfs-site.xml
>
>   <property>
>     <name>dfs.datanode.address</name>
>     <value>hostname.dc.xx.org:50010</value>
>   </property>
>   <property>
>     <name>dfs.datanode.ipc.address</name>
>     <value>hostname.dc.xx.org:50020</value>
>   </property>
>   <property>
>     <name>dfs.datanode.http.address</name>
>     <value>hostname.dc.xx.org:50075</value>
>   </property>
>
> Regards,
> RT
>
