Posted to common-user@hadoop.apache.org by Taeho Kang <tk...@gmail.com> on 2008/11/25 11:04:42 UTC

Datanode log for errors

Hi,

I have encountered some IOExceptions on a DataNode while intermediate/temporary
map-reduce data was being written to HDFS.

2008-11-25 18:27:08,070 INFO org.apache.hadoop.dfs.DataNode: writeBlock blk_-460494523413678075 received exception java.io.IOException: Block blk_-460494523413678075 is valid, and cannot be written to.
2008-11-25 18:27:08,070 ERROR org.apache.hadoop.dfs.DataNode: 10.31.xx.xxx:50010:DataXceiver: java.io.IOException: Block blk_-460494523413678075 is valid, and cannot be written to.
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:616)
        at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1995)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1074)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:938)
        at java.lang.Thread.run(Thread.java:619)

It looks like one of the HDD partitions has a problem with being written to,
but the log doesn't show which partition the block was on.
Is there a way to find that out?
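
For reference, the only way I can think of is to search each directory listed
in the dfs.data.dir property for a file named after the block ID. Here is a
rough sketch of the idea; the data dir paths below are made up for
illustration (the real ones come from your hadoop-site.xml), and the demo
layout is created just so the loop has something to find:

```shell
# Hypothetical sketch: find which dfs.data.dir partition holds a given block.
# Substitute the real dfs.data.dir values for the stand-in paths below.
BLOCK="blk_-460494523413678075"

# Simulated layout for illustration: two data dirs, block lives in the second.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/data1/dfs/data/current" "$DEMO/data2/dfs/data/current"
touch "$DEMO/data2/dfs/data/current/$BLOCK"

# Search every data dir for the block file (this also catches the .meta file).
for d in "$DEMO/data1/dfs/data" "$DEMO/data2/dfs/data"; do
    find "$d" -name "${BLOCK}*"
done
```

Whichever data dir the file turns up under tells you the partition; running
df on that path would then show the underlying device.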

(Or it could be a new feature for the next version...)

Thanks in advance,

/Taeho