Posted to mapreduce-user@hadoop.apache.org by Shai Erera <se...@gmail.com> on 2011/04/18 14:41:20 UTC

TestLocalDFS Fails

Hi

I've checked out Hadoop 0.20.2 from
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.2, and from
Cygwin I ran 'ant test-core -Dtestcase=TestLocalDFS'. The test fails: nothing
is printed to the console, but
build/test/TEST-org.apache.hadoop.hdfs.TestLocalDFS.txt shows errors like
this:

2011-04-18 15:34:13,881 INFO  FSNamesystem.audit
(FSNamesystem.java:logAuditEvent(108)) -
ugi=haifa\shaie,Domain Users,root,Administrators,Users,Offer Remote Assistance Helpers
ip=/127.0.0.1    cmd=create    src=/user/haifa/shaie/somewhat/random.txt
dst=null    perm=haifa\shaie:supergroup:rw-r--r--
2011-04-18 15:34:13,886 INFO  hdfs.StateChange
(FSNamesystem.java:allocateBlock(1441)) - BLOCK* NameSystem.allocateBlock:
/user/haifa/shaie/somewhat/random.txt. blk_-307683559712087848_1001
2011-04-18 15:34:13,921 INFO  datanode.DataNode
(DataXceiver.java:writeBlock(228)) - Receiving block
blk_-307683559712087848_1001 src: /127.0.0.1:55335 dest: /127.0.0.1:55325
2011-04-18 15:34:13,930 INFO  datanode.DataNode
(BlockReceiver.java:lastDataNodeRun(828)) - PacketResponder
blk_-307683559712087848_1001 0 Exception java.io.IOException: could not move
files for blk_-307683559712087848_1001 from tmp to
D:\dev\hadoop\hadoop-0.20.2\build\test\data\dfs\data\data1\current\blk_-307683559712087848
    at
org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:104)
    at
org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:92)
    at
org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.addBlock(FSDataset.java:417)
    at
org.apache.hadoop.hdfs.server.datanode.FSDataset.finalizeBlock(FSDataset.java:1163)
    at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.lastDataNodeRun(BlockReceiver.java:804)
    at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:846)
    at java.lang.Thread.run(Thread.java:736)

2011-04-18 15:34:13,931 INFO  datanode.DataNode
(BlockReceiver.java:lastDataNodeRun(834)) - PacketResponder 0 for block
blk_-307683559712087848_1001 terminating
2011-04-18 15:34:13,934 WARN  hdfs.DFSClient (DFSClient.java:run(2471)) -
DFSOutputStream ResponseProcessor exception  for block
blk_-307683559712087848_1001java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:191)
    at java.io.DataInputStream.readLong(DataInputStream.java:410)
    at
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:119)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2424)

2011-04-18 15:34:13,935 WARN  hdfs.DFSClient
(DFSClient.java:processDatanodeError(2507)) - Error Recovery for block
blk_-307683559712087848_1001 bad datanode[0] 127.0.0.1:55325
2011-04-18 15:34:13,936 ERROR hdfs.DFSClient (DFSClient.java:close(1045)) -
Exception closing file /user/haifa/shaie/somewhat/random.txt :
java.io.IOException: All datanodes 127.0.0.1:55325 are bad. Aborting...
java.io.IOException: All datanodes 127.0.0.1:55325 are bad. Aborting...
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2556)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2102)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2265)

I first ran into this error when trying to run the test from Eclipse, and
thought perhaps my environment was misconfigured, so I tried Ant as well. I've
also disabled my firewall, to no avail.

I'm trying to use MiniDFSCluster in my JUnit tests, and I run into this error
whenever I create a file; a sketch of what my test does is below.
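
This is roughly what the test looks like (a minimal sketch rather than my
actual code; the class name, path, and data are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

import junit.framework.TestCase;

public class TestMiniClusterCreate extends TestCase {

  public void testCreateFile() throws Exception {
    Configuration conf = new Configuration();
    // Start a single-datanode cluster and format its storage directories.
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
    try {
      FileSystem fs = cluster.getFileSystem();
      // This is the step that fails for me: the write pipeline aborts
      // with "All datanodes ... are bad" as soon as the data is flushed.
      FSDataOutputStream out = fs.create(new Path("/user/test/file.txt"));
      out.writeBytes("some test data");
      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}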

Any ideas?

Shai