Posted to hdfs-user@hadoop.apache.org by cho ju il <tj...@kgrid.co.kr> on 2014/09/15 08:14:56 UTC

File status is "OPENFORWRITE"

hadoop version 1.1.2

An upload to HDFS failed (/hdfs/20140722/13186104.0).
The last block never reached "addStoredBlock".
The file never went through "finalizeINodeFileUnderConstruction" or "completeFileInternal".
When I run the "fsck" tool, the file's status is "OPENFORWRITE".

What happened during the upload process?
How can I close the file?
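
For the second question, one way to force the file closed in Hadoop 1.x is to ask the namenode to recover the dead writer's lease via DistributedFileSystem.recoverLease(); the namenode will also expire the lease on its own once it passes the hard limit (about one hour) and finalize the file then. A minimal sketch, assuming the recoverLease(Path) signature present in 1.1.2-era builds (verify against your exact version):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class RecoverStuckFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // assumes fs.default.name in the config points at the cluster
            FileSystem fs = FileSystem.get(conf);
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Ask the namenode to preempt the old client's lease and
            // finalize the last block; true means the file is now closed.
            boolean closed = dfs.recoverLease(new Path("/hdfs/20140722/13186104.0"));
            System.out.println("file closed: " + closed);
        }
    }

If recoverLease() returns false, the namenode has started block recovery but not yet finished; retrying after a short sleep is the usual pattern.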
 
##### Namenode log: trace of the upload
2014-07-22 20:36:37,709 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1824)) - BLOCK* NameSystem.allocateBlock: /hdfs/20140722/13186104.0. blk_6104808573660656227_13249243
2014-07-22 20:37:04,269 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3846)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.1.1.32:40010 is added to blk_6104808573660656227_13249243 size 134217728
2014-07-22 20:37:04,280 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1824)) - BLOCK* NameSystem.allocateBlock: /hdfs/20140722/13186104.0. blk_3817848998556822201_13249250
2014-07-22 20:37:31,232 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3846)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.1.1.32:40010 is added to blk_3817848998556822201_13249250 size 134217728
2014-07-22 20:37:31,240 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1824)) - BLOCK* NameSystem.allocateBlock: /hdfs/20140722/13186104.0. blk_8022029517227763004_13249254
------>>> addStoredBlock ???
------>>> finalizeINodeFileUnderConstruction ???
------>>> completeFileInternal ???
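
For context, the normal sequence behind this trace: each time the client fills a ~128 MB block it asks the namenode for a new one ("allocateBlock"); when a datanode finishes receiving a block it reports it, and the namenode logs "addStoredBlock"; finally the client's close() triggers complete(), which runs completeFileInternal -> finalizeINodeFileUnderConstruction. Here the third allocateBlock at 20:37:31 is never followed by an addStoredBlock, so close() cannot have succeeded. A minimal sketch of the client side that drives these log lines (path and sizes are illustrative, not your actual uploader):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WritePathSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // create() marks the file "under construction" on the namenode
            FSDataOutputStream out = fs.create(new Path("/tmp/example.0"));
            byte[] buf = new byte[1 << 20];
            for (int i = 0; i < 300; i++) {
                // crossing each 128 MB block boundary triggers an
                // "allocateBlock" line in the namenode log
                out.write(buf);
            }
            // close() waits for the pipeline to ack the last packet; the
            // datanode then reports the block ("addStoredBlock"), and the
            // client's complete() call runs completeFileInternal ->
            // finalizeINodeFileUnderConstruction, clearing OPENFORWRITE
            out.close();
        }
    }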
 
##### Datanode: WARN log
2014-07-22 20:38:43,224 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
        at sun.nio.ch.IOUtil.write(IOUtil.java:40)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
        at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:135)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:939)
        at java.lang.Thread.run(Thread.java:662)
2014-07-22 20:38:43,224 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.1.32:40010, storageID=DS-1712122094-127.0.0.1-40010-1371813045878, infoPort=40075, ipcPort=40020):DataXceiver
java.nio.channels.ClosedByInterruptException
        at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
        at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:292)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:339)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:403)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:581)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
        at java.lang.Thread.run(Thread.java:662)
2014-07-22 20:39:25,957 INFO org.apache.hadoop.hdfs.server.datano
 
##### DFSClient library: error log
2014-07-22 20:38:42,456 DFSOutputStream ResponseProcessor exception  for block blk_8022029517227763004_13249254java.net.SocketTimeoutException: 63000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.1.1.32:60088 remote=/10.1.1.32:40010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at java.io.DataInputStream.readLong(DataInputStream.java:399)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3127)
 
2014-07-22 20:38:43,223 Error Recovery for block blk_8022029517227763004_13249254 bad datanode[0] 10.1.1.32:40010
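
Reading the two logs together, a plausible chain of events: the client waited 63000 ms for a pipeline ack and timed out (in Hadoop 1.x the ack timeout is dfs.socket.timeout, default 60000 ms, plus a 3000 ms extension per datanode in the pipeline, which matches the single datanode here), then tore down the connection; the datanode's PacketResponder then failed to write its ack back ("Connection reset by peer") and the receiving thread was interrupted (ClosedByInterruptException). Because the client never reached complete(), the namenode never finalized the file, which is exactly the OPENFORWRITE state fsck reports. If the datanode was merely slow (GC pause, disk contention) rather than dead, raising the client-side timeouts may prevent a recurrence. A minimal sketch, assuming the Hadoop 1.x configuration keys; the values are illustrative, not recommendations:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ClientTimeoutSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // how long the client waits on reads and pipeline acks
            conf.setInt("dfs.socket.timeout", 120000);
            // how long a datanode blocks writing to the next node or client
            conf.setInt("dfs.datanode.socket.write.timeout", 120000);
            FileSystem fs = FileSystem.get(conf);
            // ... retry the upload through this FileSystem instance ...
        }
    }

Note these settings only hide slowness; if 10.1.1.32 regularly pauses for more than a minute, checking that datanode's GC logs and disk health is the more durable fix.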