Posted to hdfs-user@hadoop.apache.org by EdwardKing <zh...@neusoft.com> on 2014/07/02 10:35:22 UTC
why hadoop-daemon.sh stop itself
I use Hadoop 2.2.0. I start the hadoop-daemon services as follows:
[hdfs@localhost logs]$ hadoop-daemon.sh start namenode
[hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
[hdfs@localhost logs]$ hadoop-daemon.sh start datanode
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4331 DataNode
4364 Jps
After a while, when I run the jps command again, I find the DataNode has disappeared. Why?
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4364 Jps
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
just use rm -rf command inside the datanode directory
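A minimal sketch of that fix, assuming the nn/dn storage paths shown in the log output quoted below (adjust them to your dfs.namenode.name.dir and dfs.datanode.data.dir settings): first compare the two clusterIDs, and only wipe the DataNode directory if they really differ.

```shell
# Extract the clusterID from an HDFS storage directory's VERSION file.
# VERSION files are plain key=value lines, e.g. "clusterID=CID-...".
cluster_id() {
  grep '^clusterID=' "$1/current/VERSION" | cut -d= -f2
}

# Paths assumed from the log output quoted below:
#   nn_id=$(cluster_id /home/yarn/hadoop-2.2.0/hdfs/nn)
#   dn_id=$(cluster_id /home/yarn/hadoop-2.2.0/hdfs/dn)
# If the IDs differ, stop the DataNode, wipe its storage directory,
# and restart it so it re-registers with the NameNode's clusterID:
#   hadoop-daemon.sh stop datanode
#   rm -rf /home/yarn/hadoop-2.2.0/hdfs/dn/*
#   hadoop-daemon.sh start datanode
```

Note that wiping the directory discards any block replicas stored on that node; on a freshly formatted single-node setup there is nothing to lose.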
On Wed, Jul 2, 2014 at 2:33 PM, EdwardKing <zh...@neusoft.com> wrote:
> I only formatted the namenode; I didn't delete the contents of the datanode
> directory, because I don't know which command deletes them. How do I do it?
> Thanks.
>
> [hdfs@localhost ~]$ hdfs namenode -format
> 14/07/02 01:25:10 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG: host = localhost.localdomain/127.0.0.1
> STARTUP_MSG: args = [-format]
> STARTUP_MSG: version = 2.2.0
> STARTUP_MSG: classpath =
> /home/yarn/hadoop-2.2.0/etc/hadoop:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop
/common/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs:/home/yarn/hadoop-2.2.0/share/hadoop
/hdfs/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/shar
e/hadoop/yarn/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home/yarn/hadoop-2.2.0/shar
e/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hado
op/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/yarn/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
> STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r
> 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
> STARTUP_MSG: java = 1.7.0_60
> ************************************************************/
> 14/07/02 01:25:10 INFO namenode.NameNode: registered UNIX signal handlers
> for [TERM, HUP, INT]
> Formatting using clusterid: CID-796495a5-7e08-40f0-baf1-be4fdb656a25
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read includes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read excludes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 14/07/02 01:25:11 INFO util.GSet: Computing capacity for map BlocksMap
> 14/07/02 01:25:11 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:11 INFO util.GSet: 2.0% max memory = 386.7 MB
> 14/07/02 01:25:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> defaultReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplication = 512
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> minReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplicationStreams = 2
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> shouldCheckForEnoughRacks = false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> encryptDataTransfer = false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: fsOwner = hdfs
> (auth:SIMPLE)
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup =
> supergroup
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
> 14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more
> than 10 times
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.extension = 30000
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is
> enabled
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of
> total heap and retry cache entry expiry time is 600000 millis
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode
> Retry Cache
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7
> MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
> Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn
> ? (Y or N) Y
> 14/07/02 01:25:16 INFO common.Storage: Storage directory
> /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
> 14/07/02 01:25:16 INFO namenode.FSImage: Saving image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000
> using no compression
> 14/07/02 01:25:16 INFO namenode.FSImage: Image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of
> size 196 bytes saved in 0 seconds.
> 14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain
> 1 images with txid >= 0
> 14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
> 14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> [hdfs@localhost ~]$
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:57 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> see this error
> java.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>
> Did you format your namenode? After formatting the namenode, did you
> delete the contents of the datanode directory?
>
>
> On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I found the logs, but I don't know what to do about it. Thanks.
>>
>> 2014-07-02 01:07:38,473 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> java.io.IOException: Incompatible clusterIDs in
>> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
>> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
>> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-07-02 01:07:38,489 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> 2014-07-02 01:07:38,601 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190)
>> 2014-07-02 01:07:40,602 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 0
>> 2014-07-02 01:07:40,610 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
>> ************************************************************/
>> 2014-07-02 01:08:34,956 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>>
>>
>>
>> ----- Original Message -----
>> *From:* Nitin Pawar <ni...@gmail.com>
>> *To:* user@hadoop.apache.org
>> *Sent:* Wednesday, July 02, 2014 4:49 PM
>> *Subject:* Re: why hadoop-daemon.sh stop itself
>>
>> Pull the logs from the datanode log file;
>>
>> they will tell you why it stopped.
>>
>>
>> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>>
>>> I use Hadoop 2.2.0. I start the hadoop-daemon services as follows:
>>>
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4331 DataNode
>>> 4364 Jps
>>>
>>> After a while, when I run the jps command again, I find the DataNode has disappeared. Why?
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4364 Jps
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>>
>>
>
>
>
> --
> Nitin Pawar
>
>
>
--
Nitin Pawar
op/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/yarn/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
> STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r
> 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
> STARTUP_MSG: java = 1.7.0_60
> ************************************************************/
> 14/07/02 01:25:10 INFO namenode.NameNode: registered UNIX signal handlers
> for [TERM, HUP, INT]
> Formatting using clusterid: CID-796495a5-7e08-40f0-baf1-be4fdb656a25
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read includes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read excludes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 14/07/02 01:25:11 INFO util.GSet: Computing capacity for map BlocksMap
> 14/07/02 01:25:11 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:11 INFO util.GSet: 2.0% max memory = 386.7 MB
> 14/07/02 01:25:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> defaultReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplication = 512
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> minReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplicationStreams = 2
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> shouldCheckForEnoughRacks = false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> encryptDataTransfer = false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: fsOwner = hdfs
> (auth:SIMPLE)
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup =
> supergroup
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
> 14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more
> than 10 times
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.extension = 30000
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is
> enabled
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of
> total heap and retry cache entry expiry time is 600000 millis
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode
> Retry Cache
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7
> MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
> Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn
> ? (Y or N) Y
> 14/07/02 01:25:16 INFO common.Storage: Storage directory
> /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
> 14/07/02 01:25:16 INFO namenode.FSImage: Saving image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000
> using no compression
> 14/07/02 01:25:16 INFO namenode.FSImage: Image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of
> size 196 bytes saved in 0 seconds.
> 14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain
> 1 images with txid >= 0
> 14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
> 14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> [hdfs@localhost ~]$
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:57 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> see this error:
> java.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>
> Did you format your namenode? After formatting the namenode, did you
> delete the contents of the datanode directory?
>
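> For anyone hitting this later: the clusterID lives in the VERSION file under
> each storage directory's current/ subdirectory, so you can confirm the
> mismatch before deleting anything. A minimal sketch (the cluster_id helper
> and the throwaway demo files are mine, not part of Hadoop; the real files
> would be the namenode's and datanode's current/VERSION, e.g.
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/VERSION):

```shell
#!/bin/sh
# Extract the clusterID= value from an HDFS storage VERSION file.
cluster_id() {
    sed -n 's/^clusterID=//p' "$1"
}

# Demo with two throwaway files standing in for the namenode's and
# datanode's current/VERSION (IDs taken from the log in this thread).
nn=$(mktemp); dn=$(mktemp)
printf 'clusterID=CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e\n' > "$nn"
printf 'clusterID=CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab\n' > "$dn"

if [ "$(cluster_id "$nn")" = "$(cluster_id "$dn")" ]; then
    echo "clusterIDs match"
else
    echo "clusterID mismatch - datanode will refuse to register"
fi
rm -f "$nn" "$dn"
```

> With the IDs from this thread, the script reports a mismatch, which is
> exactly the condition the datanode log complains about.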
>
> On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I found the logs, but I don't know what to do about this error. Thanks.
>>
>> 2014-07-02 01:07:38,473 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> java.io.IOException: Incompatible clusterIDs in
>> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
>> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
>> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-07-02 01:07:38,489 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> 2014-07-02 01:07:38,601 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190)
>> 2014-07-02 01:07:40,602 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 0
>> 2014-07-02 01:07:40,610 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
>> ************************************************************/
>> 2014-07-02 01:08:34,956 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>>
>>
>>
>> ----- Original Message -----
>> *From:* Nitin Pawar <ni...@gmail.com>
>> *To:* user@hadoop.apache.org
>> *Sent:* Wednesday, July 02, 2014 4:49 PM
>> *Subject:* Re: why hadoop-daemon.sh stop itself
>>
>> Pull out the logs from the datanode log file;
>>
>> they will tell you why it stopped.
>>
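>> To make "pull out the logs" concrete: by default the datanode writes to
>> $HADOOP_HOME/logs/hadoop-<user>-datanode-<hostname>.log, and grepping for
>> FATAL gets you to the failure quickly. A small sketch (dn_errors is my
>> name for the helper, not a Hadoop command; the example path is a guess
>> based on the install layout in this thread):

```shell
#!/bin/sh
# Print the lines that usually explain a datanode exit: FATAL entries
# and the clusterID complaint seen later in this thread.
dn_errors() {
    grep -E 'FATAL|Incompatible clusterIDs' "$1"
}

# Real usage would look roughly like:
#   dn_errors /home/yarn/hadoop-2.2.0/logs/hadoop-hdfs-datanode-localhost.localdomain.log
# Demo against a one-line sample log:
log=$(mktemp)
printf '2014-07-02 01:07:38,473 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool\n' > "$log"
dn_errors "$log"
rm -f "$log"
```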
>>
>> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>>
>>> I use Hadoop 2.2.0. I start the HDFS daemons with hadoop-daemon.sh, as follows:
>>>
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4331 DataNode
>>> 4364 Jps
>>>
>>> After a while, when I run the jps command, I find the DataNode has disappeared. Why?
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4364 Jps
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>>
>>
>
>
>
> --
> Nitin Pawar
>
>
>
--
Nitin Pawar
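
Summing up the fix from this thread as one sequence: stop the datanode, clear its storage directory so it can re-register under the namenode's new clusterID, and restart it. A sketch for a throwaway single-node setup (clear_dn_dir is my helper, and the path must match dfs.datanode.data.dir in your hdfs-site.xml; this deletes every block replica on the node, so never run it on a cluster holding real data):

```shell
#!/bin/sh
# Clear an HDFS datanode storage directory (keeps the directory itself)
# so the restarted datanode adopts the namenode's new clusterID.
clear_dn_dir() {
    rm -rf "${1:?usage: clear_dn_dir <storage-dir>}"/*
}

# On the node from this thread the sequence would be roughly:
#   hadoop-daemon.sh stop datanode
#   clear_dn_dir /home/yarn/hadoop-2.2.0/hdfs/dn
#   hadoop-daemon.sh start datanode
#   jps    # DataNode should now stay in the list
```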
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
just use rm -rf command inside the datanode directory
On Wed, Jul 2, 2014 at 2:33 PM, EdwardKing <zh...@neusoft.com> wrote:
> I only format namenode,I don't delete the contents for datanode
> directory, because I don't know which command to delete them. How to do it?
> Thanks.
>
> [hdfs@localhost ~]$ hdfs namenode -format
> 14/07/02 01:25:10 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG: host = localhost.localdomain/127.0.0.1
> STARTUP_MSG: args = [-format]
> STARTUP_MSG: version = 2.2.0
> STARTUP_MSG: classpath =
> /home/yarn/hadoop-2.2.0/etc/hadoop:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop
/common/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs:/home/yarn/hadoop-2.2.0/share/hadoop
/hdfs/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/shar
e/hadoop/yarn/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home/yarn/hadoop-2.2.0/shar
e/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hado
op/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/yarn/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
> STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r
> 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
> STARTUP_MSG: java = 1.7.0_60
> ************************************************************/
> 14/07/02 01:25:10 INFO namenode.NameNode: registered UNIX signal handlers
> for [TERM, HUP, INT]
> Formatting using clusterid: CID-796495a5-7e08-40f0-baf1-be4fdb656a25
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read includes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read excludes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 14/07/02 01:25:11 INFO util.GSet: Computing capacity for map BlocksMap
> 14/07/02 01:25:11 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:11 INFO util.GSet: 2.0% max memory = 386.7 MB
> 14/07/02 01:25:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> defaultReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplication = 512
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> minReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplicationStreams = 2
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> shouldCheckForEnoughRacks = false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> encryptDataTransfer = false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: fsOwner = hdfs
> (auth:SIMPLE)
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup =
> supergroup
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
> 14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more
> than 10 times
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.extension = 30000
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is
> enabled
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of
> total heap and retry cache entry expiry time is 600000 millis
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode
> Retry Cache
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7
> MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
> Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn
> ? (Y or N) Y
> 14/07/02 01:25:16 INFO common.Storage: Storage directory
> /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
> 14/07/02 01:25:16 INFO namenode.FSImage: Saving image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000
> using no compression
> 14/07/02 01:25:16 INFO namenode.FSImage: Image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of
> size 196 bytes saved in 0 seconds.
> 14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain
> 1 images with txid >= 0
> 14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
> 14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> [hdfs@localhost ~]$
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:57 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> see this error
> ava.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>
> Did you format your namenode ? after formatting the namenode did you
> delete the contents for datanode directory?
>
>
> On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I find logs,but I don't know how to do it. Thanks
>>
>> 2014-07-02 01:07:38,473 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> java.io.IOException: Incompatible clusterIDs in
>> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
>> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
>> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
>> at
>> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-07-02 01:07:38,489 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> 2014-07-02 01:07:38,601 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190)
>> 2014-07-02 01:07:40,602 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 0
>> 2014-07-02 01:07:40,610 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
>> ************************************************************/
>> 2014-07-02 01:08:34,956 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>>
>>
>>
>> ----- Original Message -----
>> *From:* Nitin Pawar <ni...@gmail.com>
>> *To:* user@hadoop.apache.org
>> *Sent:* Wednesday, July 02, 2014 4:49 PM
>> *Subject:* Re: why hadoop-daemon.sh stop itself
>>
>> pull out the logs from datanode log file
>>
>> it will tell why it stopped
>>
>>
>> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>>
>>> I use Hadoop 2.2.0. I start the Hadoop daemons as follows:
>>>
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4331 DataNode
>>> 4364 Jps
>>>
>>> After a while, when I run jps again, the DataNode has disappeared. Why?
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4364 Jps
>>>
>>>
>>>
>>> ---------------------------------------------------------------------------------------------------
>>> Confidentiality Notice: The information contained in this e-mail and any
>>> accompanying attachment(s)
>>> is intended only for the use of the intended recipient and may be
>>> confidential and/or privileged of
>>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>>> reader of this communication is
>>> not the intended recipient, unauthorized use, forwarding, printing,
>>> storing, disclosure or copying
>>> is strictly prohibited, and may be unlawful.If you have received this
>>> communication in error,please
>>> immediately notify the sender by return e-mail, and delete the original
>>> message and all copies from
>>> your system. Thank you.
>>>
>>> ---------------------------------------------------------------------------------------------------
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>>
>>
>
>
>
> --
> Nitin Pawar
>
>
>
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
Just run rm -rf on the contents of the datanode directory.
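A minimal sketch of that cleanup, assuming the datanode data directory is the one visible in the log output below (substitute your own dfs.datanode.data.dir value):

```shell
# Sketch only: DN_DIR is an assumption taken from the log output in this
# thread; point it at your configured dfs.datanode.data.dir before running
# anything destructive.
DN_DIR=/home/yarn/hadoop-2.2.0/hdfs/dn

if [ -d "$DN_DIR" ]; then
  hadoop-daemon.sh stop datanode   # make sure the DataNode is down first
  rm -rf "$DN_DIR"/*               # wipe the stale block pool and VERSION file
  hadoop-daemon.sh start datanode  # it re-registers under the new clusterID
else
  echo "datanode dir $DN_DIR not found; set DN_DIR to your data directory"
fi
```

Note that this destroys every block replica stored on the node, which is only acceptable on a freshly formatted test cluster like this one.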
On Wed, Jul 2, 2014 at 2:33 PM, EdwardKing <zh...@neusoft.com> wrote:
> I only formatted the namenode; I didn't delete the contents of the datanode
> directory, because I don't know which command deletes them. How do I do it?
> Thanks.
>
> [hdfs@localhost ~]$ hdfs namenode -format
> 14/07/02 01:25:10 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG: host = localhost.localdomain/127.0.0.1
> STARTUP_MSG: args = [-format]
> STARTUP_MSG: version = 2.2.0
> STARTUP_MSG: classpath = /home/yarn/hadoop-2.2.0/etc/hadoop:... (long jar classpath elided for readability)
> STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r
> 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
> STARTUP_MSG: java = 1.7.0_60
> ************************************************************/
> 14/07/02 01:25:10 INFO namenode.NameNode: registered UNIX signal handlers
> for [TERM, HUP, INT]
> Formatting using clusterid: CID-796495a5-7e08-40f0-baf1-be4fdb656a25
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read includes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO namenode.HostFileManager: read excludes:
> HostSet(
> )
> 14/07/02 01:25:11 INFO blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 14/07/02 01:25:11 INFO util.GSet: Computing capacity for map BlocksMap
> 14/07/02 01:25:11 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:11 INFO util.GSet: 2.0% max memory = 386.7 MB
> 14/07/02 01:25:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> defaultReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplication = 512
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> minReplication = 1
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> maxReplicationStreams = 2
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> shouldCheckForEnoughRacks = false
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 14/07/02 01:25:11 INFO blockmanagement.BlockManager:
> encryptDataTransfer = false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: fsOwner = hdfs
> (auth:SIMPLE)
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup =
> supergroup
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
> 14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
> 14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more
> than 10 times
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 14/07/02 01:25:12 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.extension = 30000
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is
> enabled
> 14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of
> total heap and retry cache entry expiry time is 600000 millis
> 14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode
> Retry Cache
> 14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
> 14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7
> MB
> 14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
> Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn
> ? (Y or N) Y
> 14/07/02 01:25:16 INFO common.Storage: Storage directory
> /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
> 14/07/02 01:25:16 INFO namenode.FSImage: Saving image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000
> using no compression
> 14/07/02 01:25:16 INFO namenode.FSImage: Image file
> /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of
> size 196 bytes saved in 0 seconds.
> 14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain
> 1 images with txid >= 0
> 14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
> 14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> [hdfs@localhost ~]$
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:57 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> See this error:
> java.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>
> Did you format your namenode? After formatting the namenode, did you
> delete the contents of the datanode directory?
>
>
> On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I found the logs, but I don't know what to do about this. Thanks.
>>
>> 2014-07-02 01:07:38,473 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> java.io.IOException: Incompatible clusterIDs in
>> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
>> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
>> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
>> at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>> at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>> at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>> at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
>> at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
>> at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
>> at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
>> at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
>> at java.lang.Thread.run(Thread.java:745)
>> 2014-07-02 01:07:38,489 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
>> 127.0.0.1:9000
>> 2014-07-02 01:07:38,601 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> BP-279671289-127.0.0.1-1404285849267 (storage id
>> DS-601761441-127.0.0.1-50010-1404205370190)
>> 2014-07-02 01:07:40,602 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> with status 0
>> 2014-07-02 01:07:40,610 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
>> ************************************************************/
>> 2014-07-02 01:08:34,956 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>>
>>
>>
>> ----- Original Message -----
>> *From:* Nitin Pawar <ni...@gmail.com>
>> *To:* user@hadoop.apache.org
>> *Sent:* Wednesday, July 02, 2014 4:49 PM
>> *Subject:* Re: why hadoop-daemon.sh stop itself
>>
>> Pull the relevant entries out of the DataNode log file;
>>
>> they will tell you why it stopped.
>>
>>
>> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>>
>>> I use Hadoop 2.2.0. I start the Hadoop daemons as follows:
>>>
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4331 DataNode
>>> 4364 Jps
>>>
>>> After a while, when I run jps again, the DataNode has disappeared. Why?
>>> [hdfs@localhost logs]$ jps
>>> 4135 NameNode
>>> 4270 SecondaryNameNode
>>> 4364 Jps
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>>
>>
>
>
>
> --
> Nitin Pawar
>
>
>
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by EdwardKing <zh...@neusoft.com>.
I only formatted the namenode; I didn't delete the contents of the datanode directory, because I don't know which command deletes them. How do I do it? Thanks.
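Before deleting anything, one way to confirm the mismatch is to compare the clusterID recorded in the two VERSION files. This is only a sketch; the nn/dn paths are assumptions based on the directories shown in the logs, so adjust them to your dfs.namenode.name.dir and dfs.datanode.data.dir settings.

```shell
# Hypothetical paths, taken from the log output in this thread.
NN_VERSION=/home/yarn/hadoop-2.2.0/hdfs/nn/current/VERSION
DN_VERSION=/home/yarn/hadoop-2.2.0/hdfs/dn/current/VERSION

cluster_id() {  # print the clusterID= value stored in a VERSION file
  grep '^clusterID=' "$1" | cut -d= -f2
}

if [ -f "$NN_VERSION" ] && [ -f "$DN_VERSION" ]; then
  if [ "$(cluster_id "$NN_VERSION")" = "$(cluster_id "$DN_VERSION")" ]; then
    echo "clusterIDs match"
  else
    echo "clusterIDs differ: the datanode directory holds data from an old format"
  fi
fi
```

If the IDs differ, either clear the datanode directory (losing its blocks) or edit the datanode VERSION file so its clusterID matches the namenode's; on a just-formatted test cluster, clearing the directory is the simpler option.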
[hdfs@localhost ~]$ hdfs namenode -format
14/07/02 01:25:10 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /home/yarn/hadoop-2.2.0/etc/hadoop:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/yarn/ha
doop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs:/home/yarn/ha
doop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home
/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home
/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/yarn/
hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/yarn/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_60
************************************************************/
14/07/02 01:25:10 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-796495a5-7e08-40f0-baf1-be4fdb656a25
14/07/02 01:25:11 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/07/02 01:25:11 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/07/02 01:25:11 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/07/02 01:25:11 INFO util.GSet: Computing capacity for map BlocksMap
14/07/02 01:25:11 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:11 INFO util.GSet: 2.0% max memory = 386.7 MB
14/07/02 01:25:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/07/02 01:25:11 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/07/02 01:25:11 INFO blockmanagement.BlockManager: defaultReplication = 1
14/07/02 01:25:11 INFO blockmanagement.BlockManager: maxReplication = 512
14/07/02 01:25:11 INFO blockmanagement.BlockManager: minReplication = 1
14/07/02 01:25:11 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/07/02 01:25:11 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/07/02 01:25:11 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/07/02 01:25:11 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/07/02 01:25:11 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup = supergroup
14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn ? (Y or N) Y
14/07/02 01:25:16 INFO common.Storage: Storage directory /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
14/07/02 01:25:16 INFO namenode.FSImage: Saving image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 using no compression
14/07/02 01:25:16 INFO namenode.FSImage: Image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[hdfs@localhost ~]$
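Reformatting the namenode, as above, generates a fresh clusterID, so the datanode's old storage directory has to be cleared before it can rejoin. A minimal sketch of that cleanup, assuming the dfs.datanode.data.dir shown in the log (/home/yarn/hadoop-2.2.0/hdfs/dn); it is demonstrated on a throwaway directory so the commands are safe to copy and try:

```shell
# Demonstrated on a mock directory; on the real cluster, set DN_DIR to
# /home/yarn/hadoop-2.2.0/hdfs/dn (or wherever dfs.datanode.data.dir points).
DN_DIR=$(mktemp -d)
mkdir -p "$DN_DIR/current"
echo "clusterID=CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab" > "$DN_DIR/current/VERSION"

# hadoop-daemon.sh stop datanode    # real cluster: stop the datanode first
rm -rf "$DN_DIR"/*                  # wipe the stale block-pool metadata
# hadoop-daemon.sh start datanode   # real cluster: the datanode re-registers
#                                   # and picks up the namenode's new clusterID
ls -A "$DN_DIR"                     # no output: the directory is empty again
```

Note that this destroys the blocks stored on that datanode; on a single-node test setup that is usually acceptable right after a reformat.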
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:57 PM
Subject: Re: why hadoop-daemon.sh stop itself
See this error:
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode? After formatting it, did you delete the contents of the datanode directory?
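A quick way to confirm the mismatch Nitin points at is to compare the clusterID recorded in the namenode's and the datanode's VERSION files. A sketch using mock files laid out like the paths in the log (/home/yarn/hadoop-2.2.0/hdfs/nn and .../dn); on the real machine you would grep the real files instead:

```shell
BASE=$(mktemp -d)   # stand-in for /home/yarn/hadoop-2.2.0/hdfs
mkdir -p "$BASE/nn/current" "$BASE/dn/current"
# Values taken from the error reported in this thread:
echo "clusterID=CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e" > "$BASE/nn/current/VERSION"
echo "clusterID=CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab" > "$BASE/dn/current/VERSION"

# The two lines printed here differ, which is exactly the
# "Incompatible clusterIDs" condition that makes the datanode exit.
grep clusterID "$BASE/nn/current/VERSION" "$BASE/dn/current/VERSION"
```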
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
I found the logs, but I don't know how to fix it. Thanks.
2014-07-02 01:07:38,473 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:745)
2014-07-02 01:07:38,489 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
2014-07-02 01:07:38,601 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190)
2014-07-02 01:07:40,602 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-02 01:07:40,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
2014-07-02 01:08:34,956 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
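Besides deleting the datanode directory, a workaround sometimes used to keep the existing blocks is to copy the namenode's clusterID into the datanode's VERSION file. This is an assumption on my part, not something suggested in the thread, so back the directory up first. Sketched on mock files with the IDs from the log above:

```shell
BASE=$(mktemp -d)   # mock layout standing in for /home/yarn/hadoop-2.2.0/hdfs
mkdir -p "$BASE/nn/current" "$BASE/dn/current"
echo "clusterID=CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e" > "$BASE/nn/current/VERSION"
echo "clusterID=CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab" > "$BASE/dn/current/VERSION"

# Read the namenode's ID and write it over the datanode's stale one.
NN_CID=$(sed -n 's/^clusterID=//p' "$BASE/nn/current/VERSION")
sed -i "s/^clusterID=.*/clusterID=$NN_CID/" "$BASE/dn/current/VERSION"
grep clusterID "$BASE/dn/current/VERSION"   # now matches the namenode's ID
```

After the edit the datanode should start and register against the reformatted namenode without losing its block data.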
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:49 PM
Subject: Re: why hadoop-daemon.sh stop itself
Pull the logs out of the datanode log file; it will tell you why it stopped.
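For reference, hadoop-daemon.sh writes per-daemon logs under $HADOOP_HOME/logs as hadoop-<user>-<daemon>-<hostname>.log (the exact directory can differ if HADOOP_LOG_DIR is set). A sketch of pulling the tail of the datanode log, demonstrated on a mock file:

```shell
LOGDIR=$(mktemp -d)   # stand-in for /home/yarn/hadoop-2.2.0/logs
LOG="$LOGDIR/hadoop-hdfs-datanode-localhost.localdomain.log"
echo "2014-07-02 01:07:38,473 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool" > "$LOG"

tail -n 50 "$LOGDIR"/hadoop-*-datanode-*.log   # the last lines usually say why it died
grep FATAL "$LOG"                              # jump straight to fatal errors
```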
On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
I use Hadoop 2.2.0. I start the hadoop-daemon services as follows:
[hdfs@localhost logs]$ hadoop-daemon.sh start namenode
[hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
[hdfs@localhost logs]$ hadoop-daemon.sh start datanode
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4331 DataNode
4364 Jps
After a while, when I run the jps command again, I find the DataNode has disappeared. Why?
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4364 Jps
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------
--
Nitin Pawar
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by EdwardKing <zh...@neusoft.com>.
I only format namenode,I don't delete the contents for datanode directory, because I don't know which command to delete them. How to do it? Thanks.
[hdfs@localhost ~]$ hdfs namenode -format
14/07/02 01:25:10 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /home/yarn/hadoop-2.2.0/etc/hadoop:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/yarn/ha
doop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs:/home/yarn/ha
doop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home
/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home
/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/yarn/
hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/yarn/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_60
************************************************************/
14/07/02 01:25:10 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-796495a5-7e08-40f0-baf1-be4fdb656a25
14/07/02 01:25:11 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/07/02 01:25:11 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/07/02 01:25:11 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/07/02 01:25:11 INFO util.GSet: Computing capacity for map BlocksMap
14/07/02 01:25:11 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:11 INFO util.GSet: 2.0% max memory = 386.7 MB
14/07/02 01:25:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/07/02 01:25:11 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/07/02 01:25:11 INFO blockmanagement.BlockManager: defaultReplication = 1
14/07/02 01:25:11 INFO blockmanagement.BlockManager: maxReplication = 512
14/07/02 01:25:11 INFO blockmanagement.BlockManager: minReplication = 1
14/07/02 01:25:11 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/07/02 01:25:11 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/07/02 01:25:11 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/07/02 01:25:11 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/07/02 01:25:11 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup = supergroup
14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn ? (Y or N) Y
14/07/02 01:25:16 INFO common.Storage: Storage directory /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
14/07/02 01:25:16 INFO namenode.FSImage: Saving image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 using no compression
14/07/02 01:25:16 INFO namenode.FSImage: Image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[hdfs@localhost ~]$
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:57 PM
Subject: Re: why hadoop-daemon.sh stop itself
see this error
ava.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode ? after formatting the namenode did you delete the contents for datanode directory?
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
I find logs,but I don't know how to do it. Thanks
2014-07-02 01:07:38,473 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:745)
2014-07-02 01:07:38,489 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
2014-07-02 01:07:38,601 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190)
2014-07-02 01:07:40,602 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-02 01:07:40,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
2014-07-02 01:08:34,956 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:49 PM
Subject: Re: why hadoop-daemon.sh stop itself
pull out the logs from datanode log file
it will tell why it stopped
On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
I use hadoop2.2.0 , I start hadoop-daemon service,like follows:
[hdfs@localhost logs]$ hadoop-daemon.sh start namenode
[hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
[hdfs@localhost logs]$ hadoop-daemon.sh start datanode
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4331 DataNode
4364 Jps
After a while,when I use jps command,I find datanode is disappeared.Why?
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4364 Jps
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by EdwardKing <zh...@neusoft.com>.
I only formatted the namenode; I didn't delete the contents of the datanode directory, because I don't know which command to use. How do I do it? Thanks.
[hdfs@localhost ~]$ hdfs namenode -format
14/07/02 01:25:10 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /home/yarn/hadoop-2.2.0/etc/hadoop:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/yarn/ha
doop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs:/home/yarn/ha
doop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home
/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home
/yarn/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/yarn/
hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/yarn/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_60
************************************************************/
14/07/02 01:25:10 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-796495a5-7e08-40f0-baf1-be4fdb656a25
14/07/02 01:25:11 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/07/02 01:25:11 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/07/02 01:25:11 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/07/02 01:25:11 INFO util.GSet: Computing capacity for map BlocksMap
14/07/02 01:25:11 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:11 INFO util.GSet: 2.0% max memory = 386.7 MB
14/07/02 01:25:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/07/02 01:25:11 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/07/02 01:25:11 INFO blockmanagement.BlockManager: defaultReplication = 1
14/07/02 01:25:11 INFO blockmanagement.BlockManager: maxReplication = 512
14/07/02 01:25:11 INFO blockmanagement.BlockManager: minReplication = 1
14/07/02 01:25:11 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/07/02 01:25:11 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/07/02 01:25:11 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/07/02 01:25:11 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/07/02 01:25:11 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup = supergroup
14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn ? (Y or N) Y
14/07/02 01:25:16 INFO common.Storage: Storage directory /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
14/07/02 01:25:16 INFO namenode.FSImage: Saving image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 using no compression
14/07/02 01:25:16 INFO namenode.FSImage: Image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[hdfs@localhost ~]$
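The usual remedy, sketched here under the assumption that dfs.datanode.data.dir is /home/yarn/hadoop-2.2.0/hdfs/dn as shown in the error: stop the datanode, clear its storage directory (this destroys any block data held on that node), then start it again so it registers with the newly formatted namenode's clusterID. The rm step is simulated on a temporary directory below so the sketch is runnable as-is; on a real node you would run hadoop-daemon.sh stop/start datanode around it.

```shell
# Sketch: wipe stale datanode storage so it adopts the new clusterID.
# DN_DIR stands in for dfs.datanode.data.dir (/home/yarn/hadoop-2.2.0/hdfs/dn
# in this thread); here it is a temp dir populated with fake storage files.
DN_DIR=$(mktemp -d)
mkdir -p "$DN_DIR/current/BP-279671289-127.0.0.1-1404285849267"
echo 'clusterID=CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab' > "$DN_DIR/current/VERSION"
rm -rf "$DN_DIR"/*                 # destroys any block data on this datanode!
LEFT=$(ls -A "$DN_DIR" | wc -l)    # count what remains (should be nothing)
echo "$LEFT"
rmdir "$DN_DIR"
```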
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:57 PM
Subject: Re: why hadoop-daemon.sh stop itself
See this error:
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode? After formatting the namenode, did you delete the contents of the datanode directory?
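Each side records its clusterID in a VERSION file under its storage directory (current/VERSION), which is how a mismatch like this can be confirmed. The sketch below simulates the comparison with temporary files so it runs as-is; the real paths would be under the namenode's dfs.namenode.name.dir and the datanode's dfs.datanode.data.dir (hdfs/nn and hdfs/dn in this thread).

```shell
# Sketch: compare the clusterID the namenode and datanode each recorded.
# NN_VER / DN_VER stand in for <name.dir>/current/VERSION and
# <data.dir>/current/VERSION; the IDs are the ones from this thread's error.
NN_VER=$(mktemp); DN_VER=$(mktemp)
echo 'clusterID=CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e' > "$NN_VER"
echo 'clusterID=CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab' > "$DN_VER"
NN_ID=$(grep -o 'CID-[0-9a-f-]*' "$NN_VER")
DN_ID=$(grep -o 'CID-[0-9a-f-]*' "$DN_VER")
if [ "$NN_ID" != "$DN_ID" ]; then
  echo "clusterID mismatch: namenode=$NN_ID datanode=$DN_ID"
fi
rm -f "$NN_VER" "$DN_VER"
```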
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
I found the logs, but I don't know what to do about this error. Thanks.
2014-07-02 01:07:38,473 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:745)
2014-07-02 01:07:38,489 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
2014-07-02 01:07:38,601 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190)
2014-07-02 01:07:40,602 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-02 01:07:40,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
2014-07-02 01:08:34,956 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
14/07/02 01:25:11 INFO namenode.FSNamesystem: supergroup = supergroup
14/07/02 01:25:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/07/02 01:25:11 INFO namenode.FSNamesystem: HA Enabled: false
14/07/02 01:25:11 INFO namenode.FSNamesystem: Append Enabled: true
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map INodeMap
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 1.0% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/07/02 01:25:12 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/02 01:25:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/02 01:25:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/02 01:25:12 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/07/02 01:25:12 INFO util.GSet: VM type = 32-bit
14/07/02 01:25:12 INFO util.GSet: 0.029999999329447746% max memory = 386.7 MB
14/07/02 01:25:12 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/yarn/hadoop-2.2.0/hdfs/nn ? (Y or N) Y
14/07/02 01:25:16 INFO common.Storage: Storage directory /home/yarn/hadoop-2.2.0/hdfs/nn has been successfully formatted.
14/07/02 01:25:16 INFO namenode.FSImage: Saving image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 using no compression
14/07/02 01:25:16 INFO namenode.FSImage: Image file /home/yarn/hadoop-2.2.0/hdfs/nn/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
14/07/02 01:25:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/07/02 01:25:16 INFO util.ExitUtil: Exiting with status 0
14/07/02 01:25:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[hdfs@localhost ~]$
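[Editor's note] The cleanup step the thread converges on (clear the datanode directory after reformatting the namenode, then restart the datanode) can be sketched as a small shell helper. This is a hedged sketch, not a command from the thread itself: the function name is made up for illustration, and the datanode path is an assumption taken from the log lines above.

```shell
#!/bin/sh
# Minimal sketch: empty a datanode data directory so that, on restart,
# the DataNode re-registers under the NameNode's new clusterID.
# WARNING: this discards the blocks stored in that directory.

clear_datanode_dir() {
    dn_dir=$1
    # Guard against an empty or missing path ("rm -rf /current" accidents).
    [ -n "$dn_dir" ] && [ -d "$dn_dir" ] || return 1
    rm -rf "$dn_dir/current" "$dn_dir/in_use.lock"
}

# Intended usage on the node in this thread (stop the datanode first):
#   hadoop-daemon.sh stop datanode
#   clear_datanode_dir /home/yarn/hadoop-2.2.0/hdfs/dn
#   hadoop-daemon.sh start datanode
```

After the restart, `jps` should show the DataNode process staying up instead of exiting shortly after start.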
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:57 PM
Subject: Re: why hadoop-daemon.sh stop itself
See this error:
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode? After formatting the namenode, did you delete the contents of the datanode directory?
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
I found the logs, but I don't know how to fix it. Thanks
2014-07-02 01:07:38,473 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:745)
2014-07-02 01:07:38,489 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
2014-07-02 01:07:38,601 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190)
2014-07-02 01:07:40,602 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-02 01:07:40,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
2014-07-02 01:08:34,956 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:49 PM
Subject: Re: why hadoop-daemon.sh stop itself
Pull out the entries from the datanode log file;
they will tell you why it stopped.
On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
I use Hadoop 2.2.0 and start the Hadoop daemon services as follows:
[hdfs@localhost logs]$ hadoop-daemon.sh start namenode
[hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
[hdfs@localhost logs]$ hadoop-daemon.sh start datanode
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4331 DataNode
4364 Jps
After a while, when I run the jps command, I find the DataNode has disappeared. Why?
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4364 Jps
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------
--
Nitin Pawar
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
See this error:
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn:
namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode
clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode? After formatting the namenode, did you delete
the contents of the datanode directory?
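[Editor's note] A quick way to confirm the mismatch being pointed at here is to compare the clusterID lines in the two VERSION files. The sketch below is illustrative: the helper name is hypothetical, and the example paths are assumptions based on the directories mentioned in the log.

```shell
#!/bin/sh
# Sketch: compare the clusterID recorded by the namenode with the one
# recorded by the datanode. A mismatch is exactly what triggers the
# "Incompatible clusterIDs" IOException in the datanode log.

cluster_ids_match() {
    nn_id=$(grep '^clusterID=' "$1") || return 1
    dn_id=$(grep '^clusterID=' "$2") || return 1
    [ "$nn_id" = "$dn_id" ]
}

# Intended usage with the paths from this thread:
#   cluster_ids_match /home/yarn/hadoop-2.2.0/hdfs/nn/current/VERSION \
#                     /home/yarn/hadoop-2.2.0/hdfs/dn/current/VERSION \
#     && echo "clusterIDs match" || echo "clusterIDs differ"
```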
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
> I found the logs, but I don't know how to fix it. Thanks
>
> 2014-07-02 01:07:38,473 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> java.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
> at java.lang.Thread.run(Thread.java:745)
> 2014-07-02 01:07:38,489 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> 2014-07-02 01:07:38,601 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190)
> 2014-07-02 01:07:40,602 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2014-07-02 01:07:40,610 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> 2014-07-02 01:08:34,956 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
>
>
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:49 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> Pull out the entries from the datanode log file;
> they will tell you why it stopped.
>
>
> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I use Hadoop 2.2.0 and start the Hadoop daemon services as follows:
>>
>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4331 DataNode
>> 4364 Jps
>>
>> After a while, when I run the jps command, I find the DataNode has disappeared. Why?
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4364 Jps
>>
>>
>>
>>
>
>
>
> --
> Nitin Pawar
>
>
>
--
Nitin Pawar
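[Editor's note] An alternative to wiping the datanode directory, for readers who want to keep existing block data, is to rewrite the datanode's recorded clusterID to match the namenode's. This is a commonly used workaround rather than anything prescribed in this thread; the function name is hypothetical, the paths are assumptions, and after a namenode reformat the old blocks belong to a defunct block pool even if the datanode starts cleanly.

```shell
#!/bin/sh
# Sketch: copy the clusterID from the namenode's VERSION file into the
# datanode's VERSION file. Run only while the datanode is stopped.
# Note: uses "sed -i" in GNU sed syntax.

sync_cluster_id() {
    nn_version=$1
    dn_version=$2
    nn_id=$(sed -n 's/^clusterID=//p' "$nn_version")
    [ -n "$nn_id" ] || return 1
    sed -i "s/^clusterID=.*/clusterID=$nn_id/" "$dn_version"
}

# Intended usage with the paths from this thread:
#   sync_cluster_id /home/yarn/hadoop-2.2.0/hdfs/nn/current/VERSION \
#                   /home/yarn/hadoop-2.2.0/hdfs/dn/current/VERSION
```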
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
see this error
ava.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn:
namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode
clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode ? after formatting the namenode did you delete
the contents for datanode directory?
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
> I find logs,but I don't know how to do it. Thanks
>
> 2014-07-02 01:07:38,473 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> java.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
> at java.lang.Thread.run(Thread.java:745)
> 2014-07-02 01:07:38,489 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> 2014-07-02 01:07:38,601 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190)
> 2014-07-02 01:07:40,602 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2014-07-02 01:07:40,610 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> 2014-07-02 01:08:34,956 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
>
>
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:49 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> pull out the logs from datanode log file
>
> it will tell why it stopped
>
>
> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I use hadoop2.2.0 , I start hadoop-daemon service,like follows:
>>
>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4331 DataNode
>> 4364 Jps
>>
>> After a while,when I use jps command,I find datanode is disappeared.Why?
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4364 Jps
>>
>>
>>
>> ---------------------------------------------------------------------------------------------------
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s)
>> is intended only for the use of the intended recipient and may be
>> confidential and/or privileged of
>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>> reader of this communication is
>> not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying
>> is strictly prohibited, and may be unlawful.If you have received this
>> communication in error,please
>> immediately notify the sender by return e-mail, and delete the original
>> message and all copies from
>> your system. Thank you.
>>
>> ---------------------------------------------------------------------------------------------------
>>
>
>
>
> --
> Nitin Pawar
>
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
see this error
ava.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn:
namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode
clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode ? after formatting the namenode did you delete
the contents for datanode directory?
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
> I find logs,but I don't know how to do it. Thanks
>
> 2014-07-02 01:07:38,473 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> java.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
> at java.lang.Thread.run(Thread.java:745)
> 2014-07-02 01:07:38,489 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> 2014-07-02 01:07:38,601 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190)
> 2014-07-02 01:07:40,602 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2014-07-02 01:07:40,610 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> 2014-07-02 01:08:34,956 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
>
>
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:49 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> pull out the logs from datanode log file
>
> it will tell why it stopped
>
>
> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I use hadoop2.2.0 , I start hadoop-daemon service,like follows:
>>
>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4331 DataNode
>> 4364 Jps
>>
>> After a while,when I use jps command,I find datanode is disappeared.Why?
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4364 Jps
>>
>>
>>
>> ---------------------------------------------------------------------------------------------------
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s)
>> is intended only for the use of the intended recipient and may be
>> confidential and/or privileged of
>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>> reader of this communication is
>> not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying
>> is strictly prohibited, and may be unlawful.If you have received this
>> communication in error,please
>> immediately notify the sender by return e-mail, and delete the original
>> message and all copies from
>> your system. Thank you.
>>
>> ---------------------------------------------------------------------------------------------------
>>
>
>
>
> --
> Nitin Pawar
>
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
see this error
ava.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn:
namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode
clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
Did you format your namenode ? after formatting the namenode did you delete
the contents for datanode directory?
On Wed, Jul 2, 2014 at 2:24 PM, EdwardKing <zh...@neusoft.com> wrote:
> I find logs,but I don't know how to do it. Thanks
>
> 2014-07-02 01:07:38,473 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> java.io.IOException: Incompatible clusterIDs in
> /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID =
> CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID =
> CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
> at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
> at java.lang.Thread.run(Thread.java:745)
> 2014-07-02 01:07:38,489 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/
> 127.0.0.1:9000
> 2014-07-02 01:07:38,601 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-279671289-127.0.0.1-1404285849267 (storage id
> DS-601761441-127.0.0.1-50010-1404205370190)
> 2014-07-02 01:07:40,602 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2014-07-02 01:07:40,610 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
> ************************************************************/
> 2014-07-02 01:08:34,956 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
>
>
>
> ----- Original Message -----
> *From:* Nitin Pawar <ni...@gmail.com>
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, July 02, 2014 4:49 PM
> *Subject:* Re: why hadoop-daemon.sh stop itself
>
> pull out the logs from datanode log file
>
> it will tell why it stopped
>
>
> On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
>
>> I use hadoop2.2.0 , I start hadoop-daemon service,like follows:
>>
>> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
>> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4331 DataNode
>> 4364 Jps
>>
>> After a while,when I use jps command,I find datanode is disappeared.Why?
>> [hdfs@localhost logs]$ jps
>> 4135 NameNode
>> 4270 SecondaryNameNode
>> 4364 Jps
>>
>>
>>
>> ---------------------------------------------------------------------------------------------------
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s)
>> is intended only for the use of the intended recipient and may be
>> confidential and/or privileged of
>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>> reader of this communication is
>> not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying
>> is strictly prohibited, and may be unlawful.If you have received this
>> communication in error,please
>> immediately notify the sender by return e-mail, and delete the original
>> message and all copies from
>> your system. Thank you.
>>
>> ---------------------------------------------------------------------------------------------------
>>
>
>
>
> --
> Nitin Pawar
>
>
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by EdwardKing <zh...@neusoft.com>.
I found the logs, but I don't know what to do about the error. Thanks
2014-07-02 01:07:38,473 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /home/yarn/hadoop-2.2.0/hdfs/dn: namenode clusterID = CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e; datanode clusterID = CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:745)
2014-07-02 01:07:38,489 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190) service to localhost/127.0.0.1:9000
2014-07-02 01:07:38,601 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-279671289-127.0.0.1-1404285849267 (storage id DS-601761441-127.0.0.1-50010-1404205370190)
2014-07-02 01:07:40,602 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-02 01:07:40,606 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-02 01:07:40,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
2014-07-02 01:08:34,956 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
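The FATAL line above is the answer: the DataNode refuses to start because the clusterID recorded in its storage directory no longer matches the NameNode's (a fresh `hdfs namenode -format` generates a new clusterID). The sketch below reproduces that check with mock VERSION files under /tmp; on a real cluster the files live at <dfs.datanode.data.dir>/current/VERSION and <dfs.namenode.name.dir>/current/VERSION, and the two CIDs here are copied from the log:

```shell
#!/bin/sh
# Mock layout only -- real VERSION files live under the configured
# dfs.namenode.name.dir and dfs.datanode.data.dir directories.
mkdir -p /tmp/hdfs-demo/nn/current /tmp/hdfs-demo/dn/current
echo "clusterID=CID-c91ccd10-8ea0-4fb3-9037-d5f57694674e" > /tmp/hdfs-demo/nn/current/VERSION
echo "clusterID=CID-89e2e0b8-2d61-4d6a-9424-ab46e4f83cab" > /tmp/hdfs-demo/dn/current/VERSION

nn_id=$(cut -d= -f2 /tmp/hdfs-demo/nn/current/VERSION)
dn_id=$(cut -d= -f2 /tmp/hdfs-demo/dn/current/VERSION)

# The DataNode performs this comparison at startup and aborts on mismatch.
if [ "$nn_id" != "$dn_id" ]; then
    echo "Incompatible clusterIDs: namenode=$nn_id datanode=$dn_id"
fi
```

Once the mismatch is confirmed, the usual single-node fix is the one suggested earlier in this thread: stop the DataNode, remove the contents of its data directory (here /home/yarn/hadoop-2.2.0/hdfs/dn), and start it again so it re-registers with the NameNode's clusterID.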
----- Original Message -----
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:49 PM
Subject: Re: why hadoop-daemon.sh stop itself
pull the relevant lines from the DataNode log file
they will tell you why it stopped
On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
I use Hadoop 2.2.0. I start the Hadoop daemons as follows:
[hdfs@localhost logs]$ hadoop-daemon.sh start namenode
[hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
[hdfs@localhost logs]$ hadoop-daemon.sh start datanode
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4331 DataNode
4364 Jps
After a while, when I run the jps command, I find the DataNode has disappeared. Why?
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameNode
4364 Jps
--
Nitin Pawar
Re: why hadoop-daemon.sh stop itself
Posted by Nitin Pawar <ni...@gmail.com>.
pull the relevant lines from the DataNode log file
they will tell you why it stopped
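By default, hadoop-daemon.sh writes per-daemon logs under $HADOOP_HOME/logs (or $HADOOP_LOG_DIR if set), named like hadoop-<user>-datanode-<host>.log. A sketch of pulling out the lines that explain an abort; the log directory and file name below are illustrative mocks, so substitute your own paths:

```shell
#!/bin/sh
# Illustrative log dir -- on a real node use $HADOOP_HOME/logs
# (or $HADOOP_LOG_DIR) instead of this mock directory.
LOG_DIR=/tmp/hadoop-logs-demo
mkdir -p "$LOG_DIR"
printf '2014-07-02 01:07:38,473 FATAL DataNode: Initialization failed for block pool\n' \
    > "$LOG_DIR/hadoop-hdfs-datanode-localhost.log"

# FATAL and ERROR entries are the ones that explain why a daemon exited.
grep -hE 'FATAL|ERROR' "$LOG_DIR"/hadoop-*-datanode-*.log
```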
On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing <zh...@neusoft.com> wrote:
> I use Hadoop 2.2.0. I start the Hadoop daemons as follows:
>
> [hdfs@localhost logs]$ hadoop-daemon.sh start namenode
> [hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
> [hdfs@localhost logs]$ hadoop-daemon.sh start datanode
> [hdfs@localhost logs]$ jps
> 4135 NameNode
> 4270 SecondaryNameNode
> 4331 DataNode
> 4364 Jps
>
> After a while, when I run the jps command, I find the DataNode has disappeared. Why?
> [hdfs@localhost logs]$ jps
> 4135 NameNode
> 4270 SecondaryNameNode
> 4364 Jps
>
>
>
--
Nitin Pawar