Posted to common-user@hadoop.apache.org by Emmanuel JOKE <jo...@gmail.com> on 2007/06/24 14:30:38 UTC

DataXCeiver error ?

Hi Guys,

I run a cluster of 2 machines on Linux 2.6 and Java 1.6, and I keep seeing
this kind of error in the slave datanode.

FIRST ERROR:
2007-06-24 08:25:22,688 ERROR dfs.DataNode - DataXCeiver
java.io.IOException: Block blk_674889550290164539 has already been started
(though not completed), and thus cannot be created.
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:507)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:767)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:596)
        at java.lang.Thread.run(Thread.java:619)

SECOND ERROR:
2007-06-24 08:25:34,227 ERROR dfs.DataNode - DataXCeiver
java.io.IOException: Block blk_674889550290164539 is valid, and cannot be
written to.
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:491)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:767)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:596)
        at java.lang.Thread.run(Thread.java:619)

It doesn't seem to affect my crawler, but I'm wondering if it could affect
performance.
Is this normal, or have I done something wrong?

E