Posted to user@hadoop.apache.org by ch huang <ju...@gmail.com> on 2013/06/26 11:43:37 UTC

datanode can not start

I have an old cluster DataNode still running, so there is a port conflict. I changed
the default port; here is my hdfs-site.xml:


<configuration>
        <property>
                <name>dfs.name.dir</name>
                <value>/data/hadoopnamespace</value>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/data/hadoopdata</value>
        </property>
        <property>
                <name>dfs.datanode.address</name>
                <value>0.0.0.0:50011</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.datanode.max.xcievers</name>
                <value>4096</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.http.address</name>
                <value>192.168.10.22:50070</value>
        </property>
</configuration>


2013-06-26 17:37:24,923 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = CH34/192.168.10.34
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-cdh3u4
STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:03:02 PDT 2012
************************************************************/
2013-06-26 17:37:25,335 INFO
org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
set up for Hadoop, not re-installing.
2013-06-26 17:37:25,421 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
FSDatasetStatusMBean
2013-06-26 17:37:25,429 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
50011
2013-06-26 17:37:25,430 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is
-1. Opening the listener on 50075
2013-06-26 17:37:25,519 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
exit, active threads is 0
2013-06-26 17:37:25,619 INFO
org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
down all async disk service threads...
2013-06-26 17:37:25,619 INFO
org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
disk service threads have been shut down.
2013-06-26 17:37:25,620 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:303)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
2013-06-26 17:37:25,622 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
************************************************************/
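Reading the log closely suggests where the bind actually fails: the streaming server binds the new port successfully ("Opened streaming server at 50011"), and the BindException is thrown by HttpServer while "Opening the listener on 50075". The DataNode's web UI port was left at its default, so it likely still collides with the old DataNode. A hedged sketch of the extra property that would move it (the value 50076 is an arbitrary example, not taken from the thread):

```xml
<!-- Hypothetical example: move the DataNode web UI off its default
     0.0.0.0:50075, which the old DataNode presumably still holds. -->
<property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:50076</value>
</property>
```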

RE: datanode can not start

Posted by Sandeep L <sa...@outlook.com>.
Hi Huang,

Just run the following command to find which application is using port 50011:

netstat -anlp | grep 50011

It shows the process name and process ID; using that PID, kill the application listening on the port, then try to start the DataNode again. This works on Linux-based operating systems.

Thanks,
Sandeep.
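That one-liner can be extended into a small sketch (Linux; assumes `netstat` from net-tools is installed, and that 50011 is the conflicting port — substitute yours):

```shell
#!/bin/sh
PORT=50011   # hypothetical: replace with the port from your hdfs-site.xml

# List any socket on the port; -p needs root to show the PID/program name.
# grep fails (exit != 0) when nothing matches, so IN_USE records the result.
netstat -anlp 2>/dev/null | grep ":${PORT}\b" && IN_USE=1 || IN_USE=0

if [ "${IN_USE}" -eq 1 ]; then
    echo "port ${PORT} is in use; stop the process shown above"
    # e.g. kill <PID> (graceful); use kill -9 <PID> only as a last resort
else
    echo "port ${PORT} looks free; the DataNode should be able to bind it"
fi
```

On newer distributions `ss -tlnp | grep ":${PORT}"` gives the same information without net-tools.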

From: dwivedishashwat@gmail.com
Date: Wed, 26 Jun 2013 15:32:45 +0530
Subject: Re: datanode can not start
To: user@hadoop.apache.org; justlooks@gmail.com

Remove

<property>
       <name>dfs.datanode.address</name>
       <value>0.0.0.0:50011</value>
</property>

And try.

Thanks & Regards
∞
Shashwat Shriparv



On Wed, Jun 26, 2013 at 3:29 PM, varun kumar <va...@gmail.com> wrote:


Hi Huang,

Some other service is running on the port, or you did not stop the DataNode service properly.

Regards,
Varun Kumar.P










Re: datanode can not start

Posted by shashwat shriparv <dw...@gmail.com>.
Remove

<property>
       <name>dfs.datanode.address</name>
       <value>0.0.0.0:50011</value>
</property>

And try.

Thanks & Regards
∞
Shashwat Shriparv
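One caveat worth noting about this suggestion: with the override removed, the DataNode falls back to the default from hdfs-default.xml (Hadoop 0.20.x), which is exactly the address the old DataNode is presumably still holding. A sketch of the default it would revert to:

```xml
<!-- Default from hdfs-default.xml (Hadoop 0.20.x); with no override in
     hdfs-site.xml, the DataNode binds this address instead. -->
<property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:50010</value>
</property>
```

So removing the property only helps if the old DataNode has actually been stopped first.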



On Wed, Jun 26, 2013 at 3:29 PM, varun kumar <va...@gmail.com> wrote:

> Hi Huang,
>
> Some other service is running on the port, or you did not stop the
> DataNode service properly.
>
> Regards,
> Varun Kumar.P

Re: datanode can not start

Posted by shashwat shriparv <dw...@gmail.com>.
Remove the following property from hdfs-site.xml and try again:

<property>

       <name>dfs.datanode.address</name>

       <value>0.0.0.0:50011</value>

</property>
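For reference, the value of a property like this can be read back from Hadoop-style configuration XML with a few lines of standard-library Python. This is a minimal sketch, not part of the original thread; the property name and value come from the config quoted in this thread:

```python
import xml.etree.ElementTree as ET

def get_property(conf_xml: str, name: str):
    """Return the value of a named <property> from Hadoop-style
    configuration XML, or None if the property is absent."""
    root = ET.fromstring(conf_xml)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Fragment of the hdfs-site.xml quoted above:
conf = """<configuration>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50011</value>
  </property>
</configuration>"""

print(get_property(conf, "dfs.datanode.address"))  # -> 0.0.0.0:50011
```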






Thanks & Regards

∞
Shashwat Shriparv



On Wed, Jun 26, 2013 at 3:29 PM, varun kumar <va...@gmail.com> wrote:

> Hi Huang,
>
> Some other service is running on the port, or you did not stop the
> datanode service properly.
>
> Regards,
> Varun Kumar.P
>
>
> On Wed, Jun 26, 2013 at 3:13 PM, ch huang <ju...@gmail.com> wrote:
>
>> I have an old cluster DataNode still running, so there is a port
>> conflict. I changed the default port; here is my hdfs-site.xml:
>>
>>
>> <configuration>
>>
>>        <property>
>>
>>                 <name>dfs.name.dir</name>
>>
>>                 <value>/data/hadoopnamespace</value>
>>
>>         </property>
>>
>>         <property>
>>
>>                 <name>dfs.data.dir</name>
>>
>>                 <value>/data/hadoopdata</value>
>>
>>         </property>
>>
>>         <property>
>>
>>                 <name>dfs.datanode.address</name>
>>
>>                 <value>0.0.0.0:50011</value>
>>
>>         </property>
>>
>>         <property>
>>
>>                 <name>dfs.permissions</name>
>>
>>                 <value>false</value>
>>
>>         </property>
>>
>>         <property>
>>
>>                 <name>dfs.datanode.max.xcievers</name>
>>
>>                 <value>4096</value>
>>
>>         </property>
>>
>>         <property>
>>
>>                 <name>dfs.webhdfs.enabled</name>
>>
>>                 <value>true</value>
>>
>>         </property>
>>
>>         <property>
>>
>>                 <name>dfs.http.address</name>
>>
>>                 <value>192.168.10.22:50070</value>
>>
>>         </property>
>>
>> </configuration>
>>
>>
>> 2013-06-26 17:37:24,923 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = CH34/192.168.10.34
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2-cdh3u4
>> STARTUP_MSG:   build =
>> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
>> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
>> 14:03:02 PDT 2012
>> ************************************************************/
>> 2013-06-26 17:37:25,335 INFO
>> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
>> set up for Hadoop, not re-installing.
>> 2013-06-26 17:37:25,421 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
>> FSDatasetStatusMBean
>> 2013-06-26 17:37:25,429 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
>> 50011
>> 2013-06-26 17:37:25,430 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
>> 1048576 bytes/s
>> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
>> global filtersafety
>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
>> returned by webServer.getConnectors()[0].getLocalPort() before open() is
>> -1. Opening the listener on 50075
>> 2013-06-26 17:37:25,519 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
>> exit, active threads is 0
>> 2013-06-26 17:37:25,619 INFO
>> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
>> down all async disk service threads...
>> 2013-06-26 17:37:25,619 INFO
>> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
>> disk service threads have been shut down.
>> 2013-06-26 17:37:25,620 ERROR
>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
>> Address already in use
>>         at sun.nio.ch.Net.bind(Native Method)
>>         at
>> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
>>         at
>> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>>         at
>> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>>         at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:303)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
>> 2013-06-26 17:37:25,622 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
>> ************************************************************/
>>
>
>
>
> --
> Regards,
> Varun Kumar.P
>


Re: datanode can not start

Posted by varun kumar <va...@gmail.com>.
Hi Huang,

Some other service is running on the port, or you did not stop the
datanode service properly.

Regards,
Varun Kumar.P
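One way to confirm this diagnosis is to test whether the ports involved can still be bound. Note that in the log above the streaming server opened fine on 50011 and the BindException came from the HTTP listener on 50075. A minimal sketch (not from the thread) using only the Python standard library; the port numbers are taken from the log:

```python
import socket

def port_in_use(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
        except OSError:  # e.g. EADDRINUSE -- the BindException seen above
            return True
        return False

# Ports from the DataNode log: 50011 (streaming), 50075 (HTTP)
for port in (50011, 50075):
    print(port, "in use" if port_in_use(port) else "free")
```

If a port reports in use, find the owning process (for example with `lsof -i :50075` or `ss -tlnp` on Linux) and stop the old DataNode cleanly before restarting.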


On Wed, Jun 26, 2013 at 3:13 PM, ch huang <ju...@gmail.com> wrote:

> I have an old cluster DataNode still running, so there is a port
> conflict. I changed the default port; here is my hdfs-site.xml:
>
>
> <configuration>
>
>        <property>
>
>                 <name>dfs.name.dir</name>
>
>                 <value>/data/hadoopnamespace</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.data.dir</name>
>
>                 <value>/data/hadoopdata</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.datanode.address</name>
>
>                 <value>0.0.0.0:50011</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.permissions</name>
>
>                 <value>false</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.datanode.max.xcievers</name>
>
>                 <value>4096</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.webhdfs.enabled</name>
>
>                 <value>true</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.http.address</name>
>
>                 <value>192.168.10.22:50070</value>
>
>         </property>
>
> </configuration>
>
>
> 2013-06-26 17:37:24,923 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = CH34/192.168.10.34
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build =
> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
> 14:03:02 PDT 2012
> ************************************************************/
> 2013-06-26 17:37:25,335 INFO
> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
> set up for Hadoop, not re-installing.
> 2013-06-26 17:37:25,421 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2013-06-26 17:37:25,429 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> 50011
> 2013-06-26 17:37:25,430 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
> global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50075
> 2013-06-26 17:37:25,519 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
> down all async disk service threads...
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
> disk service threads have been shut down.
> 2013-06-26 17:37:25,620 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
> Address already in use
>         at sun.nio.ch.Net.bind(Native Method)
>         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>         at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:303)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
> 2013-06-26 17:37:25,622 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
> ************************************************************/
>



-- 
Regards,
Varun Kumar.P

Re: datanode can not start

Posted by varun kumar <va...@gmail.com>.
HI huang,
*
*
*Some other service is running on the port or you did not stop the datanode
service properly.*
*
*
*Regards,*
*Varun Kumar.P
*


On Wed, Jun 26, 2013 at 3:13 PM, ch huang <ju...@gmail.com> wrote:

> i have running old cluster datanode,so it exist some conflict, i changed
> default port, here is my hdfs-site.xml
>
>
> <configuration>
>
>        <property>
>
>                 <name>dfs.name.dir</name>
>
>                 <value>/data/hadoopnamespace</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.data.dir</name>
>
>                 <value>/data/hadoopdata</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.datanode.address</name>
>
>                 <value>0.0.0.0:50011</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.permissions</name>
>
>                 <value>false</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.datanode.max.xcievers</name>
>
>                 <value>4096</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.webhdfs.enabled</name>
>
>                 <value>true</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.http.address</name>
>
>                 <value>192.168.10.22:50070</value>
>
>         </property>
>
> </configuration>
>
>
> 2013-06-26 17:37:24,923 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = CH34/192.168.10.34
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build =
> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
> 14:03:02 PDT 2012
> ************************************************************/
> 2013-06-26 17:37:25,335 INFO
> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
> set up for Hadoop, not re-installing.
> 2013-06-26 17:37:25,421 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2013-06-26 17:37:25,429 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> 50011
> 2013-06-26 17:37:25,430 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
> global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50075
> 2013-06-26 17:37:25,519 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
> down all async disk service threads...
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
> disk service threads have been shut down.
> 2013-06-26 17:37:25,620 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
> Address already in use
>         at sun.nio.ch.Net.bind(Native Method)
>         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>         at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:303)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
> 2013-06-26 17:37:25,622 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
> ************************************************************/
>



-- 
Regards,
Varun Kumar.P

Re: datanode can not start

Posted by varun kumar <va...@gmail.com>.
HI huang,
*
*
*Some other service is running on the port or you did not stop the datanode
service properly.*
*
*
*Regards,*
*Varun Kumar.P
*


On Wed, Jun 26, 2013 at 3:13 PM, ch huang <ju...@gmail.com> wrote:

> i have running old cluster datanode,so it exist some conflict, i changed
> default port, here is my hdfs-site.xml
>
>
> <configuration>
>
>        <property>
>
>                 <name>dfs.name.dir</name>
>
>                 <value>/data/hadoopnamespace</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.data.dir</name>
>
>                 <value>/data/hadoopdata</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.datanode.address</name>
>
>                 <value>0.0.0.0:50011</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.permissions</name>
>
>                 <value>false</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.datanode.max.xcievers</name>
>
>                 <value>4096</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.webhdfs.enabled</name>
>
>                 <value>true</value>
>
>         </property>
>
>         <property>
>
>                 <name>dfs.http.address</name>
>
>                 <value>192.168.10.22:50070</value>
>
>         </property>
>
> </configuration>
>
>
> 2013-06-26 17:37:24,923 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = CH34/192.168.10.34
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build =
> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
> 14:03:02 PDT 2012
> ************************************************************/
> 2013-06-26 17:37:25,335 INFO
> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
> set up for Hadoop, not re-installing.
> 2013-06-26 17:37:25,421 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2013-06-26 17:37:25,429 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> 50011
> 2013-06-26 17:37:25,430 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
> global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50075
> 2013-06-26 17:37:25,519 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
> down all async disk service threads...
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
> disk service threads have been shut down.
> 2013-06-26 17:37:25,620 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
> Address already in use
>         at sun.nio.ch.Net.bind(Native Method)
>         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>         at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:303)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
> 2013-06-26 17:37:25,622 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
> ************************************************************/
>



-- 
Regards,
Varun Kumar.P

Re: datanode can not start

Posted by varun kumar <va...@gmail.com>.
HI huang,
*
*
*Some other service is running on the port or you did not stop the datanode
service properly.*
*
*
*Regards,*
*Varun Kumar.P
*


On Wed, Jun 26, 2013 at 3:13 PM, ch huang <ju...@gmail.com> wrote:

> i have running old cluster datanode,so it exist some conflict, i changed
> default port, here is my hdfs-site.xml
>
>
> <configuration>
>         <property>
>                 <name>dfs.name.dir</name>
>                 <value>/data/hadoopnamespace</value>
>         </property>
>         <property>
>                 <name>dfs.data.dir</name>
>                 <value>/data/hadoopdata</value>
>         </property>
>         <property>
>                 <name>dfs.datanode.address</name>
>                 <value>0.0.0.0:50011</value>
>         </property>
>         <property>
>                 <name>dfs.permissions</name>
>                 <value>false</value>
>         </property>
>         <property>
>                 <name>dfs.datanode.max.xcievers</name>
>                 <value>4096</value>
>         </property>
>         <property>
>                 <name>dfs.webhdfs.enabled</name>
>                 <value>true</value>
>         </property>
>         <property>
>                 <name>dfs.http.address</name>
>                 <value>192.168.10.22:50070</value>
>         </property>
> </configuration>
>
>
> 2013-06-26 17:37:24,923 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = CH34/192.168.10.34
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build =
> file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r
> 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7
> 14:03:02 PDT 2012
> ************************************************************/
> 2013-06-26 17:37:25,335 INFO
> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
> set up for Hadoop, not re-installing.
> 2013-06-26 17:37:25,421 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2013-06-26 17:37:25,429 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> 50011
> 2013-06-26 17:37:25,430 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added
> global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50075
> 2013-06-26 17:37:25,519 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
> down all async disk service threads...
> 2013-06-26 17:37:25,619 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
> disk service threads have been shut down.
> 2013-06-26 17:37:25,620 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
> Address already in use
>         at sun.nio.ch.Net.bind(Native Method)
>         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>         at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:303)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
> 2013-06-26 17:37:25,622 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
> ************************************************************/
>
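Note that the quoted log shows the streaming server opened fine on the changed port 50011; the bind failure happened while opening the web listener on 50075, the default DataNode web UI port, which the posted hdfs-site.xml does not change. A sketch of the extra property that would move it (`dfs.datanode.http.address` is the CDH3/0.20-era property name, and 50076 is just an example value):

```xml
<property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:50076</value>
</property>
```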



-- 
Regards,
Varun Kumar.P
