Posted to common-issues@hadoop.apache.org by "Ravi Hemnani (JIRA)" <ji...@apache.org> on 2013/12/23 16:06:51 UTC

[jira] [Created] (HADOOP-10180) Getting some error while increasing the hadoop cluster size.

Ravi Hemnani created HADOOP-10180:
-------------------------------------

             Summary: Getting some error while increasing the hadoop cluster size. 
                 Key: HADOOP-10180
                 URL: https://issues.apache.org/jira/browse/HADOOP-10180
             Project: Hadoop Common
          Issue Type: Task
            Reporter: Ravi Hemnani
            Priority: Trivial


We have a 5-node Hadoop cluster and are trying to increase its storage capacity. We added 2 new disks to each of the 5 boxes and followed the usual steps for adding the disks to the cluster. Everything works fine, except that whenever we restart a datanode, the following error appears repeatedly in its log file:
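For reference, the usual way to add new disks to a datanode of this vintage (the FSDataset/DataXceiver line numbers in the trace below suggest a Hadoop 1.x-era release) is to append the new mount points to `dfs.data.dir` in `hdfs-site.xml` and restart the datanode. The sketch below assumes the disks are mounted at `/data3` and `/data4`; those paths, and the pre-existing `/data1` and `/data2` entries, are hypothetical placeholders, not taken from the report:

```xml
<!-- hdfs-site.xml (sketch): the new directories are APPENDED to the
     existing comma-separated list; removing or reordering existing
     entries would make the datanode lose track of its blocks.
     Each path must be on a distinct mount point -->
<property>
  <name>dfs.data.dir</name>
  <value>/data1/dfs/data,/data2/dfs/data,/data3/dfs/data,/data4/dfs/data</value>
</property>
```

Each datanode must be restarted after the edit for the new directories to take effect, which matches the restart step at which the errors below were observed.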

2013-12-23 14:32:19,406 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.200.128:50010, storageID=DS-1937554000-172.16.200.128-50010-1376068931321, infoPort=50075, ipcPort=50020):DataXceiver
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block blk_-8997395530627676954_276834 is valid, and cannot be written to.
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:1428)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:114)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:302)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
	at java.lang.Thread.run(Thread.java:724)

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)