Posted to hdfs-user@hadoop.apache.org by Colin Kincaid Williams <di...@uw.edu> on 2014/10/04 03:57:22 UTC

datanode down, disk replaced, /etc/fstab changed. Can't bring it back up. Missing lock file?

We had a datanode go down, and our datacenter guy swapped out the disk. We
had moved to using UUIDs in /etc/fstab, but he wanted to use the /dev/id
format. He didn't back up the fstab; however, I'm not sure that's the
issue.
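
For reference, the two fstab styles in question look roughly like this;
the device path, UUID, and mount options below are made up for
illustration:

    # mount by filesystem UUID (stable even if disks get renumbered)
    UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /data9  ext4  defaults,noatime  0 2

    # mount by persistent device path
    /dev/disk/by-id/scsi-SATA_ST4000NM0033_Z1Z0ABCD-part1  /data9  ext4  defaults,noatime  0 2

Either style should work; the thing worth double-checking after a swap
is that the new entry actually mounted, e.g. with "mount | grep data9"
or "df -h /data9".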

Am I reading the log below correctly that the namenode has a lock on the
disk? I don't know how that works. I thought the lock file would belong
to the datanode itself. How do I remove the lock from the namenode to
bring the datanode back up?

If that's not the issue, how can I bring the datanode back up? Help would
be greatly appreciated.
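
One detail I notice in the lock messages below: the "nodename" reported,
3489@us3sm2hb027r09.comp.prod.local, has the form pid@hostname, and 3489
is this DataNode process itself, so the "another namenode ... has
already locked" wording may be misleading. For what it's worth, a rough
way to check who actually holds a lock file:

    # 3489 is the pid from the "acquired by nodename" lines in the log
    ps -fp 3489
    ls -l /data9/dfs/in_use.lock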




2014-10-03 18:28:18,121 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = us3sm2hb027r09.comp.prod.local/10.51.28.172
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.3.0-cdh5.0.1
STARTUP_MSG:   classpath =
/etc/hadoop/conf:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/avro.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/slf4j-log4j12.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/.//hadoop-annotations-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-cascading-javadoc.jar:/usr/lib/hadoop/.//parquet-generator-javadoc.jar:/usr/lib/hadoop/.//parquet-avro.jar:/usr/lib/hadoop/.//parquet-pig.jar:/usr/lib/hadoop/.//parquet-cascading-sources.jar:/usr/lib/hadoop/.//hadoop-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-pig-bundle-sources.jar:/usr/lib/hadoop/.//parquet-column-sources.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//parquet-pig-bundle.jar:/usr/lib/hadoop/.//parquet-hadoop.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//parquet-cascading.jar:/usr/lib/hadoop/.//parquet-encoding.jar:/usr/lib/hadoop/.//parquet-column-javadoc.jar:/usr/lib/hadoop/.//parquet-generator-sources.jar:/usr/lib/hadoop/.//hadoop-auth-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-thrift.jar:/usr/lib/hadoop/.//parquet-format.jar:/usr/lib/hadoop/.//parquet-scrooge-javadoc.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//parquet-thrift-sources.jar:/usr/lib/hadoop/.//parquet-generator.jar:/usr/lib/hadoop/.//parquet-scrooge-sources.jar:/usr/lib/hadoop/.//parquet-common.jar:/usr/lib/hadoop/.//parquet-column.jar:/usr/lib/hadoop/.//parquet-hadoop-javadoc.jar:/usr/lib/hadoop/.//parquet-format-javadoc.jar:/usr/lib/
hadoop/.//parquet-pig-sources.jar:/usr/lib/hadoop/.//hadoop-nfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-avro-javadoc.jar:/usr/lib/hadoop/.//hadoop-common-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle.jar:/usr/lib/hadoop/.//parquet-format-sources.jar:/usr/lib/hadoop/.//parquet-encoding-javadoc.jar:/usr/lib/hadoop/.//parquet-thrift-javadoc.jar:/usr/lib/hadoop/.//parquet-common-javadoc.jar:/usr/lib/hadoop/.//parquet-common-sources.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle-sources.jar:/usr/lib/hadoop/.//parquet-scrooge.jar:/usr/lib/hadoop/.//parquet-avro-sources.jar:/usr/lib/hadoop/.//parquet-encoding-sources.jar:/usr/lib/hadoop/.//parquet-test-hadoop2.jar:/usr/lib/hadoop/.//parquet-pig-javadoc.jar:/usr/lib/hadoop/.//parquet-hadoop-sources.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/jline-0.9.94.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/l
ib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/avro.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.10.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//avro.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rum
en.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//junit-4.10.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/
hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.jar
STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r
8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on
2014-05-06T19:01Z
STARTUP_MSG:   java = 1.7.0_60
************************************************************/
2014-10-03 18:28:18,163 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
handlers for [TERM, HUP, INT]
2014-10-03 18:28:20,285 WARN org.apache.hadoop.metrics2.impl.MetricsConfig:
Cannot locate configuration: tried
hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2014-10-03 18:28:20,511 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2014-10-03 18:28:20,511 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
started
2014-10-03 18:28:20,516 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is
us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:20,518 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with
maxLockedMemory = 0
2014-10-03 18:28:20,557 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
/0.0.0.0:50010
2014-10-03 18:28:20,562 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2014-10-03 18:28:20,769 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2014-10-03 18:28:20,791 INFO org.apache.hadoop.http.HttpRequestLog: Http
request log for http.requests.datanode is not defined
2014-10-03 18:28:20,825 INFO org.apache.hadoop.http.HttpServer2: Added
global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context datanode
2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2014-10-03 18:28:20,906 INFO org.apache.hadoop.http.HttpServer2:
addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2014-10-03 18:28:20,912 INFO org.apache.hadoop.http.HttpServer2: Jetty
bound to port 50075
2014-10-03 18:28:20,912 INFO org.mortbay.log: jetty-6.1.26
2014-10-03 18:28:21,514 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2014-10-03 18:28:22,127 INFO org.apache.hadoop.ipc.CallQueueManager: Using
callQueue class java.util.concurrent.LinkedBlockingQueue
2014-10-03 18:28:22,198 INFO org.apache.hadoop.ipc.Server: Starting Socket
Reader #1 for port 50020
2014-10-03 18:28:22,269 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /
0.0.0.0:50020
2014-10-03 18:28:22,295 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received
for nameservices: whprod
2014-10-03 18:28:22,358 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices
for nameservices: whprod
2014-10-03 18:28:22,369 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
(Datanode Uuid unassigned) service to us3sm2nn011r08.comp.prod.local/
10.51.28.141:8020 starting to offer service
2014-10-03 18:28:22,389 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
(Datanode Uuid unassigned) service to us3sm2nn010r07.comp.prod.local/
10.51.28.140:8020 starting to offer service
2014-10-03 18:28:22,412 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2014-10-03 18:28:22,465 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2014-10-03 18:28:22,993 INFO org.apache.hadoop.hdfs.server.common.Storage:
Data-node version: -55 and name-node layout version: -55
2014-10-03 18:28:23,008 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data1/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,019 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data10/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,028 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data11/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,037 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data2/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,039 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data3/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,047 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data4/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,056 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data5/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,058 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data6/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,066 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data7/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,083 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data8/dfs/in_use.lock acquired by nodename
3489@us3sm2hb027r09.comp.prod.local
2014-10-03 18:28:23,085 ERROR org.apache.hadoop.hdfs.server.common.Storage:
It appears that another namenode 3489@us3sm2hb027r09.comp.prod.local has
already locked the storage directory
2014-10-03 18:28:23,085 INFO org.apache.hadoop.hdfs.server.common.Storage:
Cannot lock storage /data9/dfs. The directory is already locked
2014-10-03 18:28:23,086 WARN org.apache.hadoop.hdfs.server.common.Storage:
Ignoring storage directory /data9/dfs due to an exception
java.io.IOException: Cannot lock storage /data9/dfs. The directory is
already locked
at
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:674)
at
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:493)
at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:186)
at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:924)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
at java.lang.Thread.run(Thread.java:745)
2014-10-03 18:28:23,810 INFO org.apache.hadoop.hdfs.server.common.Storage:
Analyzing storage directories for bpid
BP-1256332750-10.51.28.140-1408661299811
2014-10-03 18:28:23,810 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:23,814 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,814 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,814 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,816 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,816 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,816 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:23,820 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
2014-10-03 18:28:23,874 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool <registering> (Datanode Uuid unassigned) service to
us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
volumes - current valid volumes: 10, volumes configured: 11, volumes
failed: 1, volume failures tolerated: 0
at
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
at
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
at
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
at java.lang.Thread.run(Thread.java:745)
2014-10-03 18:28:24,350 INFO org.apache.hadoop.hdfs.server.common.Storage:
Analyzing storage directories for bpid
BP-1256332750-10.51.28.140-1408661299811
2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
Locking is disabled
2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
Restored 0 block files from trash.
2014-10-03 18:28:24,366 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
2014-10-03 18:28:24,367 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool <registering> (Datanode Uuid unassigned) service to
us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
volumes - current valid volumes: 10, volumes configured: 11, volumes
failed: 1, volume failures tolerated: 0
at
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
at
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
at
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
at java.lang.Thread.run(Thread.java:745)
2014-10-03 18:28:24,367 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
for: Block pool <registering> (Datanode Uuid unassigned) service to
us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
2014-10-03 18:28:24,368 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
for: Block pool <registering> (Datanode Uuid unassigned) service to
us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
2014-10-03 18:28:24,469 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
service not yet registered with NN
java.lang.Exception: trace
at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
at
org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:854)
at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
at java.lang.Thread.run(Thread.java:745)
2014-10-03 18:28:24,470 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
<registering> (Datanode Uuid unassigned)
2014-10-03 18:28:24,470 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
service not yet registered with NN
java.lang.Exception: trace
at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:856)
at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
at java.lang.Thread.run(Thread.java:745)
2014-10-03 18:28:26,481 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-10-03 18:28:26,484 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 0
2014-10-03 18:28:26,487 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at us3sm2hb027r09.comp.prod.local/
10.51.28.172
************************************************************/
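
Re-reading the tail of that log, the lock failure on /data9 looks like
only the first symptom; the process actually exits on "Too many failed
volumes - current valid volumes: 10, volumes configured: 11, volumes
failed: 1, volume failures tolerated: 0". With
dfs.datanode.failed.volumes.tolerated at its default of 0, a single bad
volume is fatal. If it came to it, one stopgap to bring the node up on
the remaining ten disks while /data9 is sorted out might be (a sketch;
the property would go in the datanode's hdfs-site.xml):

    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>1</value>
    </property>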

Re: datanode down, disk replaced, /etc/fstab changed. Can't bring it back up. Missing lock file?

Posted by Colin Kincaid Williams <di...@uw.edu>.
I could find no lock file on the datanode in any of the data dirs...
Therefore I cannot try "the suggested fix".
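
A search along these lines across the data dirs turns up nothing:

    # look for in_use.lock under every data dir (bash brace expansion)
    for d in /data{1..11}/dfs; do ls -l "$d/in_use.lock" 2>/dev/null; done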

On Fri, Oct 3, 2014 at 9:14 PM, Pradeep Gollakota <pr...@gmail.com>
wrote:

> Looks like you're facing the same problem as this SO.
> http://stackoverflow.com/questions/10705140/hadoop-datanode-fails-to-start-throwing-org-apache-hadoop-hdfs-server-common-sto
>
> Try the suggested fix.
>
> On Fri, Oct 3, 2014 at 6:57 PM, Colin Kincaid Williams <di...@uw.edu>
> wrote:
>
>> [original message quoted in full above; snipped]
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,350 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories
>> for bpid BP-1256332750-10.51.28.140-1408661299811
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,366 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
>> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
>> 2014-10-03 18:28:24,367 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
>> volumes - current valid volumes: 10, volumes configured: 11, volumes
>> failed: 1, volume failures tolerated: 0
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,367 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
>> 2014-10-03 18:28:24,368 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
>> 2014-10-03 18:28:24,469 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
>> service not yet registered with NN
>> java.lang.Exception: trace
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:854)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,470 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> <registering> (Datanode Uuid unassigned)
>> 2014-10-03 18:28:24,470 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
>> service not yet registered with NN
>> java.lang.Exception: trace
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:856)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:26,481 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> 2014-10-03 18:28:26,484 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 0
>> 2014-10-03 18:28:26,487 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at us3sm2hb027r09.comp.prod.local/
>> 10.51.28.172
>> ************************************************************/
>>
>>
>

Re: datanode down, disk replaced , /etc/fstab changed. Can't bring it back up. Missing lock file?

Posted by Colin Kincaid Williams <di...@uw.edu>.
I could find no lock file on the datanode in any of the data dirs, so I
cannot try "the suggested fix".
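
In case it helps the next person hitting this: the lock holder shown in the
log ("3489@us3sm2hb027r09...") is just pid@host, and it is the same PID that
acquired the other ten locks, so the DataNode appears to be tripping over its
own lock rather than a NameNode's (the "another namenode" wording comes from
the shared Storage code). That usually means two of the configured data dirs
resolve to the same underlying filesystem -- a duplicated
dfs.datanode.data.dir entry, or an fstab line mounting an already-mounted
device at /data9. A minimal sketch of checks, assuming the /dataN layout from
the log and that lsof is installed:

    # Each /dataN should be its own mount; if two entries show the same
    # device, or /data9 is missing entirely, the fstab change is the culprit.
    mount | grep '/data'
    df -h /data1 /data9

    # The DataNode writes one in_use.lock at the top of each data dir.
    ls -l /data[0-9]*/dfs/in_use.lock

    # See which process, if any, still holds the /data9 lock.
    lsof /data9/dfs/in_use.lock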

On Fri, Oct 3, 2014 at 9:14 PM, Pradeep Gollakota <pr...@gmail.com>
wrote:

> Looks like you're facing the same problem as this SO.
> http://stackoverflow.com/questions/10705140/hadoop-datanode-fails-to-start-throwing-org-apache-hadoop-hdfs-server-common-sto
>
> Try the suggested fix.
>
> On Fri, Oct 3, 2014 at 6:57 PM, Colin Kincaid Williams <di...@uw.edu>
> wrote:
>
>> We had a datanode go down, and our datacenter guy swapped out the disk.
>> We had moved to using UUIDs in the /etc/fstab, but he wanted to use the
>> /dev/id format. He didn't backup the fstab, however I'm not sure that's the
>> issue.
>>
>> I am reading in the log below that the namenode has a lock on the disk? I
>> don't know how that works. I thought the lockfile would belong to the
>> datanode itself. How do I remove the lock from the namenode to bring the
>> datanode back up?
>>
>> If that's not the issue, how can I bring the datanode back up? Help would
>> be greatly appreciated.
>>
>>
>>
>>
>> 2014-10-03 18:28:18,121 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = us3sm2hb027r09.comp.prod.local/10.51.28.172
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 2.3.0-cdh5.0.1
>> STARTUP_MSG:   classpath =
>> /etc/hadoop/conf:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/avro.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/slf4j-log4j12.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/.//hadoop-annotations-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-cascading-javadoc.jar:/usr/lib/hadoop/.//parquet-generator-javadoc.jar:/usr/lib/hadoop/.//parquet-avro.jar:/usr/lib/hadoop/.//parquet-pig.jar:/usr/lib/hadoop/.//parquet-cascading-sources.jar:/usr/lib/hadoop/.//hadoop-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-pig-bundle-sources.jar:/usr/lib/hadoop/.//parquet-column-sources.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//parquet-pig-bundle.jar:/usr/lib/hadoop/.//parquet-hadoop.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//parquet-cascading.jar:/usr/lib/hadoop/.//parquet-encoding.jar:/usr/lib/hadoop/.//parquet-column-javadoc.jar:/usr/lib/hadoop/.//parquet-generator-sources.jar:/usr/lib/hadoop/.//hadoop-auth-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-thrift.jar:/usr/lib/hadoop/.//parquet-format.jar:/usr/lib/hadoop/.//parquet-scrooge-javadoc.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//parquet-thrift-sources.jar:/usr/lib/hadoop/.//parquet-generator.jar:/usr/lib/hadoop/.//parquet-scrooge-sources.jar:/usr/lib/hadoop/.//parquet-common.jar:/usr/lib/hadoop/.//parquet-column.jar:/usr/lib/hadoop/.//parquet-hadoop-javadoc.jar:/usr/lib/hadoop/.//parquet-format-javadoc.jar:/usr/l
ib/hadoop/.//parquet-pig-sources.jar:/usr/lib/hadoop/.//hadoop-nfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-avro-javadoc.jar:/usr/lib/hadoop/.//hadoop-common-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle.jar:/usr/lib/hadoop/.//parquet-format-sources.jar:/usr/lib/hadoop/.//parquet-encoding-javadoc.jar:/usr/lib/hadoop/.//parquet-thrift-javadoc.jar:/usr/lib/hadoop/.//parquet-common-javadoc.jar:/usr/lib/hadoop/.//parquet-common-sources.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle-sources.jar:/usr/lib/hadoop/.//parquet-scrooge.jar:/usr/lib/hadoop/.//parquet-avro-sources.jar:/usr/lib/hadoop/.//parquet-encoding-sources.jar:/usr/lib/hadoop/.//parquet-test-hadoop2.jar:/usr/lib/hadoop/.//parquet-pig-javadoc.jar:/usr/lib/hadoop/.//parquet-hadoop-sources.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/jline-0.9.94.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yar
n/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/avro.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.10.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//avro.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-
rumen.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//junit-4.10.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/l
ib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.jar
>> STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r
>> 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on
>> 2014-05-06T19:01Z
>> STARTUP_MSG:   java = 1.7.0_60
>> ************************************************************/
>> 2014-10-03 18:28:18,163 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
>> handlers for [TERM, HUP, INT]
>> 2014-10-03 18:28:20,285 WARN
>> org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration:
>> tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
>> 2014-10-03 18:28:20,511 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period at 10 second(s).
>> 2014-10-03 18:28:20,511 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
>> started
>> 2014-10-03 18:28:20,516 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is
>> us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:20,518 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with
>> maxLockedMemory = 0
>> 2014-10-03 18:28:20,557 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
>> /0.0.0.0:50010
>> 2014-10-03 18:28:20,562 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
>> 1048576 bytes/s
>> 2014-10-03 18:28:20,769 INFO org.mortbay.log: Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2014-10-03 18:28:20,791 INFO org.apache.hadoop.http.HttpRequestLog: Http
>> request log for http.requests.datanode is not defined
>> 2014-10-03 18:28:20,825 INFO org.apache.hadoop.http.HttpServer2: Added
>> global filter 'safety'
>> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
>> filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
>> context datanode
>> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
>> filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
>> context static
>> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
>> filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
>> context logs
>> 2014-10-03 18:28:20,906 INFO org.apache.hadoop.http.HttpServer2:
>> addJerseyResourcePackage:
>> packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
>> pathSpec=/webhdfs/v1/*
>> 2014-10-03 18:28:20,912 INFO org.apache.hadoop.http.HttpServer2: Jetty
>> bound to port 50075
>> 2014-10-03 18:28:20,912 INFO org.mortbay.log: jetty-6.1.26
>> 2014-10-03 18:28:21,514 INFO org.mortbay.log: Started
>> SelectChannelConnector@0.0.0.0:50075
>> 2014-10-03 18:28:22,127 INFO org.apache.hadoop.ipc.CallQueueManager:
>> Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2014-10-03 18:28:22,198 INFO org.apache.hadoop.ipc.Server: Starting
>> Socket Reader #1 for port 50020
>> 2014-10-03 18:28:22,269 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /
>> 0.0.0.0:50020
>> 2014-10-03 18:28:22,295 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received
>> for nameservices: whprod
>> 2014-10-03 18:28:22,358 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices
>> for nameservices: whprod
>> 2014-10-03 18:28:22,369 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
>> (Datanode Uuid unassigned) service to us3sm2nn011r08.comp.prod.local/
>> 10.51.28.141:8020 starting to offer service
>> 2014-10-03 18:28:22,389 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
>> (Datanode Uuid unassigned) service to us3sm2nn010r07.comp.prod.local/
>> 10.51.28.140:8020 starting to offer service
>> 2014-10-03 18:28:22,412 INFO org.apache.hadoop.ipc.Server: IPC Server
>> listener on 50020: starting
>> 2014-10-03 18:28:22,465 INFO org.apache.hadoop.ipc.Server: IPC Server
>> Responder: starting
>> 2014-10-03 18:28:22,993 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and
>> name-node layout version: -55
>> 2014-10-03 18:28:23,008 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data1/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,019 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data10/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,028 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data11/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,037 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data2/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,039 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data3/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,047 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data4/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,056 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data5/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,058 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data6/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,066 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data7/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,083 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data8/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,085 ERROR
>> org.apache.hadoop.hdfs.server.common.Storage: It appears that another
>> namenode 3489@us3sm2hb027r09.comp.prod.local has already locked the
>> storage directory
>> 2014-10-03 18:28:23,085 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage
>> /data9/dfs. The directory is already locked
>> 2014-10-03 18:28:23,086 WARN
>> org.apache.hadoop.hdfs.server.common.Storage: Ignoring storage directory
>> /data9/dfs due to an exception
>> java.io.IOException: Cannot lock storage /data9/dfs. The directory is
>> already locked
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:674)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:493)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:186)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:924)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:23,810 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories
>> for bpid BP-1256332750-10.51.28.140-1408661299811
>> 2014-10-03 18:28:23,810 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,814 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,814 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,814 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,816 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,816 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,816 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,820 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
>> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
>> 2014-10-03 18:28:23,874 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
>> volumes - current valid volumes: 10, volumes configured: 11, volumes
>> failed: 1, volume failures tolerated: 0
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,350 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories
>> for bpid BP-1256332750-10.51.28.140-1408661299811
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,366 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
>> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
>> 2014-10-03 18:28:24,367 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
>> volumes - current valid volumes: 10, volumes configured: 11, volumes
>> failed: 1, volume failures tolerated: 0
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,367 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
>> 2014-10-03 18:28:24,368 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
>> 2014-10-03 18:28:24,469 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
>> service not yet registered with NN
>> java.lang.Exception: trace
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:854)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,470 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> <registering> (Datanode Uuid unassigned)
>> 2014-10-03 18:28:24,470 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
>> service not yet registered with NN
>> java.lang.Exception: trace
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:856)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:26,481 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> 2014-10-03 18:28:26,484 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 0
>> 2014-10-03 18:28:26,487 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at us3sm2hb027r09.comp.prod.local/
>> 10.51.28.172
>> ************************************************************/
>>
>>
>
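
Two distinct failures show up in this log, and they need separate fixes. The
lock error on /data9 is the same-process double-lock discussed above. The
FATAL that actually stops the node is "Too many failed volumes ... volume
failures tolerated: 0": with dfs.datanode.failed.volumes.tolerated at its
default of 0, a single bad volume keeps the whole DataNode down. A hedged
sketch of both remedies, assuming the stock CDH config dir /etc/hadoop/conf
and that /data9 is the replaced disk; the device name and UUID below are
placeholders, not values from this thread:

    # Option 1: tolerate the one failed volume so the node can rejoin
    # while the mount is sorted out. Add to hdfs-site.xml:
    #   <property>
    #     <name>dfs.datanode.failed.volumes.tolerated</name>
    #     <value>1</value>
    #   </property>

    # Option 2: fix the mount itself. Read the new disk's UUID and put a
    # UUID-based line back into /etc/fstab (placeholders shown):
    blkid /dev/sdX1
    # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data9  ext4  defaults,noatime  0 2
    mount /data9 && df -h /data9

    # Then restart the DataNode (CDH 5 package service name assumed):
    service hadoop-hdfs-datanode restart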

ib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.jar
>> STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r
>> 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on
>> 2014-05-06T19:01Z
>> STARTUP_MSG:   java = 1.7.0_60
>> ************************************************************/
>> 2014-10-03 18:28:18,163 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
>> handlers for [TERM, HUP, INT]
>> 2014-10-03 18:28:20,285 WARN
>> org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration:
>> tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
>> 2014-10-03 18:28:20,511 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period at 10 second(s).
>> 2014-10-03 18:28:20,511 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
>> started
>> 2014-10-03 18:28:20,516 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is
>> us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:20,518 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with
>> maxLockedMemory = 0
>> 2014-10-03 18:28:20,557 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
>> /0.0.0.0:50010
>> 2014-10-03 18:28:20,562 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
>> 1048576 bytes/s
>> 2014-10-03 18:28:20,769 INFO org.mortbay.log: Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2014-10-03 18:28:20,791 INFO org.apache.hadoop.http.HttpRequestLog: Http
>> request log for http.requests.datanode is not defined
>> 2014-10-03 18:28:20,825 INFO org.apache.hadoop.http.HttpServer2: Added
>> global filter 'safety'
>> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
>> filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
>> context datanode
>> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
>> filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
>> context static
>> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
>> filter static_user_filter
>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
>> context logs
>> 2014-10-03 18:28:20,906 INFO org.apache.hadoop.http.HttpServer2:
>> addJerseyResourcePackage:
>> packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
>> pathSpec=/webhdfs/v1/*
>> 2014-10-03 18:28:20,912 INFO org.apache.hadoop.http.HttpServer2: Jetty
>> bound to port 50075
>> 2014-10-03 18:28:20,912 INFO org.mortbay.log: jetty-6.1.26
>> 2014-10-03 18:28:21,514 INFO org.mortbay.log: Started
>> SelectChannelConnector@0.0.0.0:50075
>> 2014-10-03 18:28:22,127 INFO org.apache.hadoop.ipc.CallQueueManager:
>> Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2014-10-03 18:28:22,198 INFO org.apache.hadoop.ipc.Server: Starting
>> Socket Reader #1 for port 50020
>> 2014-10-03 18:28:22,269 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /
>> 0.0.0.0:50020
>> 2014-10-03 18:28:22,295 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received
>> for nameservices: whprod
>> 2014-10-03 18:28:22,358 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices
>> for nameservices: whprod
>> 2014-10-03 18:28:22,369 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
>> (Datanode Uuid unassigned) service to us3sm2nn011r08.comp.prod.local/
>> 10.51.28.141:8020 starting to offer service
>> 2014-10-03 18:28:22,389 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
>> (Datanode Uuid unassigned) service to us3sm2nn010r07.comp.prod.local/
>> 10.51.28.140:8020 starting to offer service
>> 2014-10-03 18:28:22,412 INFO org.apache.hadoop.ipc.Server: IPC Server
>> listener on 50020: starting
>> 2014-10-03 18:28:22,465 INFO org.apache.hadoop.ipc.Server: IPC Server
>> Responder: starting
>> 2014-10-03 18:28:22,993 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and
>> name-node layout version: -55
>> 2014-10-03 18:28:23,008 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data1/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,019 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data10/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,028 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data11/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,037 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data2/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,039 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data3/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,047 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data4/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,056 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data5/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,058 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data6/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,066 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data7/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,083 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data8/dfs/in_use.lock acquired by nodename
>> 3489@us3sm2hb027r09.comp.prod.local
>> 2014-10-03 18:28:23,085 ERROR
>> org.apache.hadoop.hdfs.server.common.Storage: It appears that another
>> namenode 3489@us3sm2hb027r09.comp.prod.local has already locked the
>> storage directory
>> 2014-10-03 18:28:23,085 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage
>> /data9/dfs. The directory is already locked
>> 2014-10-03 18:28:23,086 WARN
>> org.apache.hadoop.hdfs.server.common.Storage: Ignoring storage directory
>> /data9/dfs due to an exception
>> java.io.IOException: Cannot lock storage /data9/dfs. The directory is
>> already locked
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:674)
>> at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:493)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:186)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:924)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:23,810 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories
>> for bpid BP-1256332750-10.51.28.140-1408661299811
>> 2014-10-03 18:28:23,810 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,812 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,813 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:23,814 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,814 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,814 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,815 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,816 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,816 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,816 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:23,820 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
>> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
>> 2014-10-03 18:28:23,874 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
>> volumes - current valid volumes: 10, volumes configured: 11, volumes
>> failed: 1, volume failures tolerated: 0
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,350 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories
>> for bpid BP-1256332750-10.51.28.140-1408661299811
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,351 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,352 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,353 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,354 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,355 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from
>> trash.
>> 2014-10-03 18:28:24,366 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
>> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
>> 2014-10-03 18:28:24,367 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
>> volumes - current valid volumes: 10, volumes configured: 11, volumes
>> failed: 1, volume failures tolerated: 0
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,367 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
>> 2014-10-03 18:28:24,368 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool <registering> (Datanode Uuid unassigned) service to
>> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
>> 2014-10-03 18:28:24,469 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
>> service not yet registered with NN
>> java.lang.Exception: trace
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:854)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:24,470 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> <registering> (Datanode Uuid unassigned)
>> 2014-10-03 18:28:24,470 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
>> service not yet registered with NN
>> java.lang.Exception: trace
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:856)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-10-03 18:28:26,481 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> 2014-10-03 18:28:26,484 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 0
>> 2014-10-03 18:28:26,487 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at us3sm2hb027r09.comp.prod.local/
>> 10.51.28.172
>> ************************************************************/
>>
>>
>

Re: datanode down, disk replaced , /etc/fstab changed. Can't bring it back up. Missing lock file?

Posted by Colin Kincaid Williams <di...@uw.edu>.
I could find no lock file on the datanode in any of the data dirs, so
I cannot try the suggested fix.
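
For reference, this is roughly how I checked (assuming the /dataN/dfs
layout shown in the log; adjust the paths to match your
dfs.datanode.data.dir):

    # look for any leftover lock files across the data dirs
    ls -l /data*/dfs/in_use.lock

    # confirm /data9 is really its own mount point, not just a
    # directory on the root filesystem (what a bad fstab entry
    # would leave you with)
    mount | grep /data9
    df -h /data9

Also, the "another namenode ... has already locked the storage
directory" wording appears to come from storage code shared with the
namenode, and 3489@us3sm2hb027r09.comp.prod.local is this datanode's
own pid@host. So the /data9/dfs lock conflict seems to come from
within the same DataNode process, as if two configured data dirs
resolve to the same underlying directory, which is what you would see
if the replacement disk never got mounted.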

On Fri, Oct 3, 2014 at 9:14 PM, Pradeep Gollakota <pr...@gmail.com>
wrote:

> Looks like you're facing the same problem as this SO question:
> http://stackoverflow.com/questions/10705140/hadoop-datanode-fails-to-start-throwing-org-apache-hadoop-hdfs-server-common-sto
>
> Try the suggested fix.
>
> On Fri, Oct 3, 2014 at 6:57 PM, Colin Kincaid Williams <di...@uw.edu>
> wrote:
>
>> [original message and full startup log snipped]
>

Re: datanode down, disk replaced , /etc/fstab changed. Can't bring it back up. Missing lock file?

Posted by Pradeep Gollakota <pr...@gmail.com>.
Looks like you're facing the same problem as this SO question:
http://stackoverflow.com/questions/10705140/hadoop-datanode-fails-to-start-throwing-org-apache-hadoop-hdfs-server-common-sto

Try the suggested fix.
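
A couple of things worth checking before deleting anything, though. The
"another namenode ... has already locked the storage directory" message in
your log is misleading: the nodename 3489@us3sm2hb027r09.comp.prod.local in
that line is the datanode process itself, the same PID that acquired the
locks on /data1 through /data8. Nothing on the namenode holds that lock. A
lock failure from your own PID often means two of the configured data
directories ended up resolving to the same filesystem location, which would
fit a replaced disk that never got remounted after the fstab change. A
rough sketch of the checks, using the mount point and paths from your log
(adjust to your layout):

    # Is the replaced disk actually mounted at /data9, or is /data9
    # now just a directory on the root filesystem?
    mount | grep /data9
    df -h /data9 /data8    # compare device and size against a good volume

    # With the datanode stopped, a stale lock file left over from a
    # previous run can be removed:
    ls -l /data9/dfs/in_use.lock
    rm /data9/dfs/in_use.lock    # only while no datanode is running

If /data9 turns out to be sitting on the root filesystem, fix the fstab
entry and remount before starting the datanode again.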
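
Separately, the FATAL that actually kills the process is "volume failures
tolerated: 0". If you want the node back up serving its ten healthy disks
while /data9 is sorted out, Hadoop 2.x lets a datanode tolerate failed
volumes. A minimal hdfs-site.xml sketch (restart the datanode after the
change, and set it back to 0 once the disk is fixed):

    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>1</value>
    </property>

This is a workaround rather than a fix: the datanode will start with one
dead volume, and the namenode should re-replicate the blocks that lived on
/data9 from the other replicas in the cluster.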

On Fri, Oct 3, 2014 at 6:57 PM, Colin Kincaid Williams <di...@uw.edu>
wrote:

> We had a datanode go down, and our datacenter guy swapped out the disk. We
> had moved to using UUIDs in the /etc/fstab, but he wanted to use the
> /dev/id format. He didn't backup the fstab, however I'm not sure that's the
> issue.
>
> I am reading in the log below that the namenode has a lock on the disk? I
> don't know how that works. I thought the lockfile would belong to the
> datanode itself. How do I remove the lock from the namenode to bring the
> datanode back up?
>
> If that's not the issue, how can I bring the datanode back up? Help would
> be greatly appreciated.
>
>
> [startup log snipped; it is quoted in full in the original message above]
>

> 2014-10-03 18:28:23,874 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
> volumes - current valid volumes: 10, volumes configured: 11, volumes
> failed: 1, volume failures tolerated: 0
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
> at java.lang.Thread.run(Thread.java:745)
> 2014-10-03 18:28:24,350 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Analyzing storage directories for bpid
> BP-1256332750-10.51.28.140-1408661299811
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,366 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
> 2014-10-03 18:28:24,367 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
> volumes - current valid volumes: 10, volumes configured: 11, volumes
> failed: 1, volume failures tolerated: 0
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
> at java.lang.Thread.run(Thread.java:745)
> 2014-10-03 18:28:24,367 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
> 2014-10-03 18:28:24,368 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
> 2014-10-03 18:28:24,469 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
> service not yet registered with NN
> java.lang.Exception: trace
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:854)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> at java.lang.Thread.run(Thread.java:745)
> 2014-10-03 18:28:24,470 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> <registering> (Datanode Uuid unassigned)
> 2014-10-03 18:28:24,470 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
> service not yet registered with NN
> java.lang.Exception: trace
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:856)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> at java.lang.Thread.run(Thread.java:745)
> 2014-10-03 18:28:26,481 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-10-03 18:28:26,484 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2014-10-03 18:28:26,487 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at us3sm2hb027r09.comp.prod.local/
> 10.51.28.172
> ************************************************************/
>
>

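The quoted log above actually shows two distinct failures, and neither lock is held by the namenode: the nodename 3489@us3sm2hb027r09.comp.prod.local in the lock messages is the DataNode's own pid@host, so the "another namenode ... has already locked" wording is just a misleading log string. /data9/dfs being "already locked" by that same process usually means two configured storage directories resolve to the same underlying filesystem, for example if the rewritten fstab mounts one device at two mount points, or if the replacement disk never mounted and /data9 fell through to the root filesystem. A minimal diagnostic sketch, assuming the eleven storage directories are /data1/dfs through /data11/dfs as in the log:

    # List the backing device for each /dataN mount; two mount points
    # sharing one SOURCE means their dfs/ trees share a single lock file.
    findmnt -n -o SOURCE,TARGET | grep '/data'

    # Same check at the inode level: an identical device:inode pair for
    # two lock paths confirms the collision.
    stat -c '%d:%i %n' /data*/dfs/in_use.lock

    # A stale in_use.lock left by a previous run can be deleted, but only
    # while the DataNode is stopped.
    ls -l /data9/dfs/in_use.lock

If /data9 turns out not to be mounted at all, remounting the replacement disk there (and restoring the fstab entry) should clear the lock collision on the next start.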
Re: datanode down, disk replaced , /etc/fstab changed. Can't bring it back up. Missing lock file?

Posted by Pradeep Gollakota <pr...@gmail.com>.
Looks like you're facing the same problem as this SO question:
http://stackoverflow.com/questions/10705140/hadoop-datanode-fails-to-start-throwing-org-apache-hadoop-hdfs-server-common-sto

Try the suggested fix.
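
Beyond the lock collision, the FATAL lines in the quoted log ("volumes failed: 1, volume failures tolerated: 0") mean this DataNode aborts if even one of its eleven volumes is unusable; the hdfs-site.xml property dfs.datanode.failed.volumes.tolerated controls that threshold. A short sketch for verifying the setting and restarting once the mount is repaired; the init script name is the CDH packaging's and may differ on other installs:

    # Print the effective tolerance; 0 means any single failed volume
    # is fatal at startup.
    hdfs getconf -confKey dfs.datanode.failed.volumes.tolerated

    # After fixing the /data9 mount (or raising the tolerance), restart
    # just this DataNode.
    sudo service hadoop-hdfs-datanode restart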

On Fri, Oct 3, 2014 at 6:57 PM, Colin Kincaid Williams <di...@uw.edu>
wrote:

> We had a datanode go down, and our datacenter guy swapped out the disk. We
> had moved to using UUIDs in the /etc/fstab, but he wanted to use the
> /dev/id format. He didn't backup the fstab, however I'm not sure that's the
> issue.
>
> I am reading in the log below that the namenode has a lock on the disk? I
> don't know how that works. I thought the lockfile would belong to the
> datanode itself. How do I remove the lock from the namenode to bring the
> datanode back up?
>
> If that's not the issue, how can I bring the datanode back up? Help would
> be greatly appreciated.
>
>
>
>
> 2014-10-03 18:28:18,121 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = us3sm2hb027r09.comp.prod.local/10.51.28.172
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 2.3.0-cdh5.0.1
> STARTUP_MSG:   classpath =
> /etc/hadoop/conf:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/avro.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/slf4j-log4j12.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/.//hadoop-annotations-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-cascading-javadoc.jar:/usr/lib/hadoop/.//parquet-generator-javadoc.jar:/usr/lib/hadoop/.//parquet-avro.jar:/usr/lib/hadoop/.//parquet-pig.jar:/usr/lib/hadoop/.//parquet-cascading-sources.jar:/usr/lib/hadoop/.//hadoop-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-pig-bundle-sources.jar:/usr/lib/hadoop/.//parquet-column-sources.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//parquet-pig-bundle.jar:/usr/lib/hadoop/.//parquet-hadoop.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//parquet-cascading.jar:/usr/lib/hadoop/.//parquet-encoding.jar:/usr/lib/hadoop/.//parquet-column-javadoc.jar:/usr/lib/hadoop/.//parquet-generator-sources.jar:/usr/lib/hadoop/.//hadoop-auth-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-thrift.jar:/usr/lib/hadoop/.//parquet-format.jar:/usr/lib/hadoop/.//parquet-scrooge-javadoc.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//parquet-thrift-sources.jar:/usr/lib/hadoop/.//parquet-generator.jar:/usr/lib/hadoop/.//parquet-scrooge-sources.jar:/usr/lib/hadoop/.//parquet-common.jar:/usr/lib/hadoop/.//parquet-column.jar:/usr/lib/hadoop/.//parquet-hadoop-javadoc.jar:/usr/lib/hadoop/.//parquet-format-javadoc.jar:/usr/li
b/hadoop/.//parquet-pig-sources.jar:/usr/lib/hadoop/.//hadoop-nfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop/.//parquet-avro-javadoc.jar:/usr/lib/hadoop/.//hadoop-common-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle.jar:/usr/lib/hadoop/.//parquet-format-sources.jar:/usr/lib/hadoop/.//parquet-encoding-javadoc.jar:/usr/lib/hadoop/.//parquet-thrift-javadoc.jar:/usr/lib/hadoop/.//parquet-common-javadoc.jar:/usr/lib/hadoop/.//parquet-common-sources.jar:/usr/lib/hadoop/.//parquet-hadoop-bundle-sources.jar:/usr/lib/hadoop/.//parquet-scrooge.jar:/usr/lib/hadoop/.//parquet-avro-sources.jar:/usr/lib/hadoop/.//parquet-encoding-sources.jar:/usr/lib/hadoop/.//parquet-test-hadoop2.jar:/usr/lib/hadoop/.//parquet-pig-javadoc.jar:/usr/lib/hadoop/.//parquet-hadoop-sources.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/jline-0.9.94.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn
/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/avro.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.10.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//avro.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.5-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-r
umen.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//junit-4.10.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.1-tests.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.3.0-cdh5.0.1.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/li
b/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.jar
> STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r
> 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on
> 2014-05-06T19:01Z
> STARTUP_MSG:   java = 1.7.0_60
> ************************************************************/
> 2014-10-03 18:28:18,163 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
> handlers for [TERM, HUP, INT]
> 2014-10-03 18:28:20,285 WARN
> org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration:
> tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
> 2014-10-03 18:28:20,511 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2014-10-03 18:28:20,511 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
> started
> 2014-10-03 18:28:20,516 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is
> us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:20,518 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with
> maxLockedMemory = 0
> 2014-10-03 18:28:20,557 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> /0.0.0.0:50010
> 2014-10-03 18:28:20,562 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2014-10-03 18:28:20,769 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2014-10-03 18:28:20,791 INFO org.apache.hadoop.http.HttpRequestLog: Http
> request log for http.requests.datanode is not defined
> 2014-10-03 18:28:20,825 INFO org.apache.hadoop.http.HttpServer2: Added
> global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context datanode
> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context static
> 2014-10-03 18:28:20,845 INFO org.apache.hadoop.http.HttpServer2: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context logs
> 2014-10-03 18:28:20,906 INFO org.apache.hadoop.http.HttpServer2:
> addJerseyResourcePackage:
> packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources,
> pathSpec=/webhdfs/v1/*
> 2014-10-03 18:28:20,912 INFO org.apache.hadoop.http.HttpServer2: Jetty
> bound to port 50075
> 2014-10-03 18:28:20,912 INFO org.mortbay.log: jetty-6.1.26
> 2014-10-03 18:28:21,514 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50075
> 2014-10-03 18:28:22,127 INFO org.apache.hadoop.ipc.CallQueueManager: Using
> callQueue class java.util.concurrent.LinkedBlockingQueue
> 2014-10-03 18:28:22,198 INFO org.apache.hadoop.ipc.Server: Starting Socket
> Reader #1 for port 50020
> 2014-10-03 18:28:22,269 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /
> 0.0.0.0:50020
> 2014-10-03 18:28:22,295 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received
> for nameservices: whprod
> 2014-10-03 18:28:22,358 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices
> for nameservices: whprod
> 2014-10-03 18:28:22,369 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
> (Datanode Uuid unassigned) service to us3sm2nn011r08.comp.prod.local/
> 10.51.28.141:8020 starting to offer service
> 2014-10-03 18:28:22,389 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering>
> (Datanode Uuid unassigned) service to us3sm2nn010r07.comp.prod.local/
> 10.51.28.140:8020 starting to offer service
> 2014-10-03 18:28:22,412 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 50020: starting
> 2014-10-03 18:28:22,465 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2014-10-03 18:28:22,993 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Data-node version: -55 and name-node layout version: -55
> 2014-10-03 18:28:23,008 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data1/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,019 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data10/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,028 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data11/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,037 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data2/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,039 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data3/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,047 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data4/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,056 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data5/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,058 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data6/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,066 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data7/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,083 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data8/dfs/in_use.lock acquired by nodename
> 3489@us3sm2hb027r09.comp.prod.local
> 2014-10-03 18:28:23,085 ERROR
> org.apache.hadoop.hdfs.server.common.Storage: It appears that another
> namenode 3489@us3sm2hb027r09.comp.prod.local has already locked the
> storage directory
> 2014-10-03 18:28:23,085 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Cannot lock storage /data9/dfs. The directory is already locked
> 2014-10-03 18:28:23,086 WARN org.apache.hadoop.hdfs.server.common.Storage:
> Ignoring storage directory /data9/dfs due to an exception
> java.io.IOException: Cannot lock storage /data9/dfs. The directory is
> already locked
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:674)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:493)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:186)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:924)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
> at java.lang.Thread.run(Thread.java:745)
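Note the "nodename" in these lock messages: 3489@us3sm2hb027r09.comp.prod.local
is the PID and hostname of this very datanode JVM, not of a namenode. The
"another namenode ... has already locked" wording appears to be a generic
message in org.apache.hadoop.hdfs.server.common.Storage that is shared by both
daemons; the PID it prints is read back out of the in_use.lock file. When that
PID matches the starting process, the same JVM has tried to lock the same lock
file twice, which usually means two entries in dfs.datanode.data.dir resolve to
the same underlying directory -- for example, if the hand-rewritten /etc/fstab
mounted one device at two data mount points, or /data9 never got a mount of its
own. A quick sanity check, as a sketch (paths are illustrative, adjust to your
layout):

  findmnt /data9                  # is /data9 actually its own mount?
  df /data9 /data8                # same "Filesystem" column => duplicate mount
  stat -c '%d:%i %n' /data*/dfs/in_use.lock
                                  # matching device:inode pairs mean two data
                                  # dirs point at the same lock file
  fuser -v /data9/dfs/in_use.lock # which process currently holds the lock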
> 2014-10-03 18:28:23,810 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Analyzing storage directories for bpid
> BP-1256332750-10.51.28.140-1408661299811
> 2014-10-03 18:28:23,810 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,812 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,813 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:23,814 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,814 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,814 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,815 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,816 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,816 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,816 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:23,820 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
> 2014-10-03 18:28:23,874 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
> volumes - current valid volumes: 10, volumes configured: 11, volumes
> failed: 1, volume failures tolerated: 0
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
> at java.lang.Thread.run(Thread.java:745)
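This is the actual fatal condition: with dfs.datanode.failed.volumes.tolerated
at its default of 0, a single bad volume (here /data9) aborts block pool
initialization. The proper fix is to restore the /data9 mount, but if you need
the node serving its other ten disks in the meantime, one stopgap -- a sketch,
assuming you edit hdfs-site.xml on the datanode directly; under Cloudera
Manager you would set the equivalent in the service configuration instead --
is to tolerate one failed volume:

  <!-- hdfs-site.xml on the affected datanode -->
  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
  </property>

Alternatively, temporarily drop /data9 from dfs.datanode.data.dir. Either way
the datanode needs a restart to pick up the change.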
> 2014-10-03 18:28:24,350 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Analyzing storage directories for bpid
> BP-1256332750-10.51.28.140-1408661299811
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,351 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,352 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Locking is disabled
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,353 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,354 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,355 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Restored 0 block files from trash.
> 2014-10-03 18:28:24,366 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage:
> nsid=977585766;bpid=BP-1256332750-10.51.28.140-1408661299811;lv=-55;nsInfo=lv=-55;cid=CID-1ccb02a5-bfd7-4808-a925-8e9804d40ec4;nsid=977585766;c=0;bpid=BP-1256332750-10.51.28.140-1408661299811;dnuuid=d3262e01-ef42-4e4e-abcc-064972b34a11
> 2014-10-03 18:28:24,367 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed
> volumes - current valid volumes: 10, volumes configured: 11, volumes
> failed: 1, volume failures tolerated: 0
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:200)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
> at
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:936)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:895)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
> at java.lang.Thread.run(Thread.java:745)
> 2014-10-03 18:28:24,367 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn010r07.comp.prod.local/10.51.28.140:8020
> 2014-10-03 18:28:24,368 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool <registering> (Datanode Uuid unassigned) service to
> us3sm2nn011r08.comp.prod.local/10.51.28.141:8020
> 2014-10-03 18:28:24,469 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
> service not yet registered with NN
> java.lang.Exception: trace
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:854)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> at java.lang.Thread.run(Thread.java:745)
> 2014-10-03 18:28:24,470 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> <registering> (Datanode Uuid unassigned)
> 2014-10-03 18:28:24,470 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but
> service not yet registered with NN
> java.lang.Exception: trace
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:856)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> at java.lang.Thread.run(Thread.java:745)
> 2014-10-03 18:28:26,481 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-10-03 18:28:26,484 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2014-10-03 18:28:26,487 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at us3sm2hb027r09.comp.prod.local/
> 10.51.28.172
> ************************************************************/
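So, to answer the question at the top of the thread: there is no namenode lock
to remove. The in_use.lock files are advisory locks created by the datanode
process itself (PID 3489 above) and are released when that process exits, so no
lock cleanup is needed. Once /data9 is mounted on the right device -- worth
re-checking the rewritten /etc/fstab entry against blkid and findmnt output --
the datanode should come back up with all eleven volumes.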