Posted to common-issues@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2014/04/25 05:24:15 UTC

[jira] [Commented] (HADOOP-10540) Datanode upgrade in Windows fails with hardlink error.

    [ https://issues.apache.org/jira/browse/HADOOP-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980658#comment-13980658 ] 

Hudson commented on HADOOP-10540:
---------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #5569 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5569/])
HADOOP-10540. Datanode upgrade in Windows fails with hardlink error. (Contributed by Chris Nauroth and Arpit Agarwal) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589923)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HardLink.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHardLink.java

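The reported error ("Usage: hardlink create [LINKNAME] [FILENAME]") suggests the Windows `winutils hardlink create` command accepts a single link/target pair, while `createHardLinkMult` was passing several files on one command line. The integrated change rewrites HardLink.java; one shell-free way to create hard links on Java 7+ is `java.nio.file.Files.createLink`, sketched below. This is an illustrative sketch of the technique, not the actual patch; the class name and temp-file setup are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class HardLinkDemo {
    // Create a hard link in-process via NIO, avoiding a per-file shell-out
    // to an external hardlink command and its argument-count limits.
    public static void createHardLink(Path target, Path link) throws IOException {
        // Signature is createLink(link, existing): the new link name first,
        // then the existing file it should point to.
        Files.createLink(link, target);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("hardlink-demo");
        Path target = dir.resolve("blk_1001");
        Files.write(target, "block data".getBytes());

        Path link = dir.resolve("blk_1001.link");
        createHardLink(target, link);

        // Both paths now refer to the same underlying file.
        System.out.println(Files.isSameFile(target, link)); // prints "true"
    }
}
```

Linking each block with one library call per file also sidesteps command-line length limits when a storage directory holds thousands of blocks, though `Files.createLink` still requires the link and target to be on the same volume and can throw `UnsupportedOperationException` on filesystems without hard-link support.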

> Datanode upgrade in Windows fails with hardlink error.
> ------------------------------------------------------
>
>                 Key: HADOOP-10540
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10540
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: tools
>    Affects Versions: 2.4.0
>         Environment: Windows + JDK7. The issue was hit while upgrading from 1.x to 2.4.
>            Reporter: Huan Huang
>            Assignee: Arpit Agarwal
>             Fix For: 3.0.0, 2.5.0
>
>         Attachments: HDFS-6233.01.patch, HDFS-6233.02.patch, HDFS-6233.03.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due to a hard link exception.
> Repro steps:
> *Install Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x
> *Install Hadoop 2.4
> *Start the namenode with the -upgrade option
> *Start the datanode; the hard link exception below appears in the datanode service log:
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Upgrading storage directory d:\hadoop\data\hdfs\dn.
>    old LV = -44; old CTime = 0.
>    new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect command line arguments.
> 	at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
> 	at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
> 	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
> 	at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> 	at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
> 2014-04-10 22:47:12,360 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:861)
> 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
> 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
> 	at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:14,360 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-04-10 22:47:14,361 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
> 2014-04-10 22:47:14,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at myhost/10.0.0.1
> ************************************************************/
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)