Posted to hdfs-user@hadoop.apache.org by ch huang <ju...@gmail.com> on 2014/09/04 05:09:02 UTC

Datanode can not start with error "Error creating plugin: org.apache.hadoop.metrics2.sink.FileSink"

hi, maillist:

   I have a 10-worker-node Hadoop cluster running CDH 4.4.0, and on one of my
datanodes one of the disks is full.

When I restart this datanode, I get this error:


STARTUP_MSG:   java = 1.7.0_45
************************************************************/
2014-09-04 10:20:00,576 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
handlers for [TERM, HUP, INT]
2014-09-04 10:20:01,457 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2014-09-04 10:20:01,465 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error creating sink
'file'
org.apache.hadoop.metrics2.impl.MetricsConfigException: Error creating
plugin: org.apache.hadoop.metrics2.sink.FileSink
        at
org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:203)
        at
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:478)
        at
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:450)
        at
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:429)
        at
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:180)
        at
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:156)
        at
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
        at
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1792)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1728)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1925)
Caused by: org.apache.hadoop.metrics2.MetricsException: Error creating
datanode-metrics.out
        at org.apache.hadoop.metrics2.sink.FileSink.init(FileSink.java:53)
        at
org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
        ... 12 more
Caused by: java.io.FileNotFoundException: datanode-metrics.out (Permission
denied)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileWriter.<init>(FileWriter.java:107)
        at org.apache.hadoop.metrics2.sink.FileSink.init(FileSink.java:48)
        ... 13 more
2014-09-04 10:20:01,488 INFO
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
2014-09-04 10:20:01,546 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 5 second(s).
2014-09-04 10:20:01,546 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
started
2014-09-04 10:20:01,547 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is ch15
2014-09-04 10:20:01,569 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
/0.0.0.0:50010
2014-09-04 10:20:01,572 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
10485760 bytes/s
2014-09-04 10:20:01,607 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2014-09-04 10:20:01,657 INFO org.apache.hadoop.http.HttpServer: Added
global filter 'safety'
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-09-04 10:20:01,660 INFO org.apache.hadoop.http.HttpServer: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context datanode
2014-09-04 10:20:01,660 INFO org.apache.hadoop.http.HttpServer: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2014-09-04 10:20:01,660 INFO org.apache.hadoop.http.HttpServer: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2014-09-04 10:20:01,664 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at
0.0.0.0:50075
2014-09-04 10:20:01,668 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = true
2014-09-04 10:20:01,670 INFO org.apache.hadoop.http.HttpServer:
addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2014-09-04 10:20:01,676 INFO org.apache.hadoop.http.HttpServer:
HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:50075
        at
org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:729)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:673)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:424)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:742)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:344)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1795)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1728)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1925)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at
org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:725)
        ... 9 more
2014-09-04 10:20:01,677 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
exit, active threads is 0
2014-09-04 10:20:01,677 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.net.BindException: Port in use: 0.0.0.0:50075
        at
org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:729)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:673)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:424)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:742)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:344)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1795)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1728)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1925)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at
org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:725)
        ... 9 more
2014-09-04 10:20:01,680 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 1
2014-09-04 10:20:01,683 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

Here is the df -h output for this node:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sdg3             213G   59G  144G  30% /
tmpfs                  32G   76K   32G   1% /dev/shm
/dev/sdg1             485M   37M  423M   8% /boot
/dev/sdd              1.8T  1.3T  510G  71% /data/1
/dev/sde              1.8T  1.2T  513G  71% /data/2
/dev/sda              1.8T  1.2T  523G  70% /data/3
/dev/sdb              1.8T  1.2T  540G  70% /data/4
/dev/sdc              1.8T  1.3T  503G  72% /data/5
/dev/sdf              1.8T  1.7T  2.9G 100% /data/6

How should I handle this? Thanks.

Re: Datanode can not start with error "Error creating plugin: org.apache.hadoop.metrics2.sink.FileSink"

Posted by Rich Haase <rd...@gmail.com>.
The reason you can't launch your datanode is:

*2014-09-04 10:20:01,677 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain*
*java.net.BindException: Port in use: 0.0.0.0:50075*

It appears that you already have a datanode instance listening on port
50075, or you have some other process listening on that port.
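
Before restarting, it may help to confirm what is actually bound to the port.
A quick check (assuming a Linux host with net-tools/lsof and a JDK on the
PATH; adjust to your environment):

    # which process is listening on the DataNode HTTP port?
    sudo netstat -tlnp | grep 50075
    # or equivalently:
    sudo lsof -i :50075
    # list running JVMs; look for a leftover DataNode process
    jps -l

If a stale DataNode JVM turns up, stop it before restarting (on a packaged
CDH 4 install the service is typically called hadoop-hdfs-datanode, e.g.
"sudo service hadoop-hdfs-datanode stop").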

The error you mentioned in the subject of your email is only a warning
message, and it is caused by a file system permission issue:

*Caused by: java.io.FileNotFoundException: datanode-metrics.out (Permission
denied)*
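
The FileSink writes to the filename configured in hadoop-metrics2.properties.
A bare name like datanode-metrics.out is resolved against the daemon's
working directory, which the datanode user evidently cannot write to.
Pointing it at an absolute, writable path should clear the warning; a minimal
sketch (the /var/log/hadoop-hdfs path is only an example, use any directory
the datanode user can write to):

    # hadoop-metrics2.properties
    datanode.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
    datanode.sink.file.filename=/var/log/hadoop-hdfs/datanode-metrics.out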


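Separately, your df output shows /data/6 at 100%. To keep a volume from
filling completely in the future, you can reserve per-volume headroom so the
DataNode stops allocating blocks before the disk is full; a sketch for
hdfs-site.xml (the 10 GB figure is just an example value):

    <property>
      <name>dfs.datanode.du.reserved</name>
      <!-- bytes reserved per volume for non-HDFS use; 10 GB here -->
      <value>10737418240</value>
    </property>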



-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."
