Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.zones.apache.org> on 2009/08/22 19:27:00 UTC

Build failed in Hudson: Hadoop-Hdfs-trunk #58

See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/58/changes

Changes:

[cdouglas] HDFS-538. Per the contract elucidated in HADOOP-6201, throw
FileNotFoundException from FileSystem::listStatus rather than returning
null. Contributed by Jakob Homan.
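The HDFS-538 change above tightens the FileSystem::listStatus contract: a listing of a nonexistent path now throws FileNotFoundException instead of returning null, so callers fail at the real cause rather than with a later NullPointerException. A minimal, self-contained sketch of that contract (this `ListStatusContract` class is a hypothetical stand-in for illustration, not Hadoop's actual FileSystem API):

```java
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in illustrating the HDFS-538 / HADOOP-6201 contract:
// listing a missing path raises FileNotFoundException instead of returning null.
public class ListStatusContract {
    private final Map<String, String[]> dirs = new HashMap<>();

    public void mkdir(String path, String... entries) {
        dirs.put(path, entries);
    }

    // Pre-HDFS-538 behavior would have been `return dirs.get(path);`,
    // handing callers a null to trip over somewhere downstream.
    public String[] listStatus(String path) throws FileNotFoundException {
        String[] entries = dirs.get(path);
        if (entries == null) {
            throw new FileNotFoundException("File " + path + " does not exist.");
        }
        return entries;
    }

    public static void main(String[] args) throws Exception {
        ListStatusContract fs = new ListStatusContract();
        fs.mkdir("/pipeline_Fi_16", "foo");
        System.out.println(fs.listStatus("/pipeline_Fi_16").length); // 1
        try {
            fs.listStatus("/no/such/dir");
        } catch (FileNotFoundException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The practical upshot for callers is that a null check after listStatus becomes dead code, and a try/catch (or an existence check beforehand) is needed instead.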

------------------------------------------
[...truncated 222912 lines...]
    [junit] 2009-08-22 17:25:53,625 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-22 17:25:53,626 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 39115 webServer.getConnectors()[0].getLocalPort() returned 39115
    [junit] 2009-08-22 17:25:53,626 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 39115
    [junit] 2009-08-22 17:25:53,626 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-22 17:25:53,686 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:39115
    [junit] 2009-08-22 17:25:53,686 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-22 17:25:53,687 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=44254
    [junit] 2009-08-22 17:25:53,688 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-22 17:25:53,688 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 44254: starting
    [junit] 2009-08-22 17:25:53,688 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:50408, storageID=, infoPort=39115, ipcPort=44254)
    [junit] 2009-08-22 17:25:53,688 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 44254: starting
    [junit] 2009-08-22 17:25:53,690 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50408 storage DS-881213117-67.195.138.9-50408-1250961953689
    [junit] 2009-08-22 17:25:53,690 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:50408
    [junit] 2009-08-22 17:25:53,746 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-881213117-67.195.138.9-50408-1250961953689 is assigned to data-node 127.0.0.1:50408
    [junit] 2009-08-22 17:25:53,746 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:50408, storageID=DS-881213117-67.195.138.9-50408-1250961953689, infoPort=39115, ipcPort=44254)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'} 
    [junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4 
    [junit] 2009-08-22 17:25:53,747 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-22 17:25:53,755 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3 is not formatted.
    [junit] 2009-08-22 17:25:53,755 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-22 17:25:53,785 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-22 17:25:53,785 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-22 17:25:53,942 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4 is not formatted.
    [junit] 2009-08-22 17:25:53,942 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-22 17:25:54,233 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-22 17:25:54,234 INFO  datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 34356
    [junit] 2009-08-22 17:25:54,234 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-22 17:25:54,235 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1250982862235 with interval 21600000
    [junit] 2009-08-22 17:25:54,236 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-22 17:25:54,236 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 50526 webServer.getConnectors()[0].getLocalPort() returned 50526
    [junit] 2009-08-22 17:25:54,237 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 50526
    [junit] 2009-08-22 17:25:54,237 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-22 17:25:54,297 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:50526
    [junit] 2009-08-22 17:25:54,298 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-22 17:25:54,299 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=60353
    [junit] 2009-08-22 17:25:54,300 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-22 17:25:54,300 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:34356, storageID=, infoPort=50526, ipcPort=60353)
    [junit] 2009-08-22 17:25:54,300 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 60353: starting
    [junit] 2009-08-22 17:25:54,300 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 60353: starting
    [junit] 2009-08-22 17:25:54,302 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34356 storage DS-259311802-67.195.138.9-34356-1250961954301
    [junit] 2009-08-22 17:25:54,302 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:34356
    [junit] 2009-08-22 17:25:54,343 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-259311802-67.195.138.9-34356-1250961954301 is assigned to data-node 127.0.0.1:34356
    [junit] 2009-08-22 17:25:54,344 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:34356, storageID=DS-259311802-67.195.138.9-34356-1250961954301, infoPort=50526, ipcPort=60353)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'} 
    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 
    [junit] 2009-08-22 17:25:54,344 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-22 17:25:54,351 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
    [junit] 2009-08-22 17:25:54,352 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-22 17:25:54,382 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-22 17:25:54,383 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-22 17:25:54,549 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-22 17:25:54,549 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-22 17:25:54,819 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-22 17:25:54,820 INFO  datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 38414
    [junit] 2009-08-22 17:25:54,820 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-22 17:25:54,820 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1250963141820 with interval 21600000
    [junit] 2009-08-22 17:25:54,822 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-22 17:25:54,822 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 41030 webServer.getConnectors()[0].getLocalPort() returned 41030
    [junit] 2009-08-22 17:25:54,822 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 41030
    [junit] 2009-08-22 17:25:54,822 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-22 17:25:54,882 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:41030
    [junit] 2009-08-22 17:25:54,883 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-22 17:25:54,884 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=53124
    [junit] 2009-08-22 17:25:54,957 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 53124: starting
    [junit] 2009-08-22 17:25:54,957 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:38414, storageID=, infoPort=41030, ipcPort=53124)
    [junit] 2009-08-22 17:25:54,957 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-22 17:25:54,958 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 53124: starting
    [junit] 2009-08-22 17:25:54,959 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:38414 storage DS-2029722745-67.195.138.9-38414-1250961954958
    [junit] 2009-08-22 17:25:54,959 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,003 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-2029722745-67.195.138.9-38414-1250961954958 is assigned to data-node 127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,003 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:38414, storageID=DS-2029722745-67.195.138.9-38414-1250961954958, infoPort=41030, ipcPort=53124)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'} 
    [junit] 2009-08-22 17:25:55,004 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-22 17:25:55,070 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-22 17:25:55,071 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-22 17:25:55,178 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/pipeline_Fi_16/foo	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-22 17:25:55,181 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /pipeline_Fi_16/foo. blk_-678847007047035635_1001
    [junit] 2009-08-22 17:25:55,211 INFO  protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35)) - FI: addBlock Pipeline[127.0.0.1:50408, 127.0.0.1:38414, 127.0.0.1:34356]
    [junit] 2009-08-22 17:25:55,212 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,212 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,213 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-678847007047035635_1001 src: /127.0.0.1:56770 dest: /127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,214 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,214 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,214 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-678847007047035635_1001 src: /127.0.0.1:41695 dest: /127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,215 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,216 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,216 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-678847007047035635_1001 src: /127.0.0.1:43698 dest: /127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,216 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,217 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,219 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,219 INFO  fi.FiTestUtil (DataTransferTestUtil.java:run(151)) - FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,219 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,219 WARN  datanode.DataNode (DataNode.java:checkDiskError(702)) - checkDiskError: exception: 
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-22 17:25:55,220 INFO  mortbay.log (?:invoke(?)) - Completed FSVolumeSet.checkDirs. Removed=0volumes. List of current volumes: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current 
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(569)) - Exception in receiveBlock for block blk_-678847007047035635_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(782)) - PacketResponder 0 for block blk_-678847007047035635_1001 Interrupted.
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-678847007047035635_1001 terminating
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-678847007047035635_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,221 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:34356, storageID=DS-259311802-67.195.138.9-34356-1250961954301, infoPort=50526, ipcPort=60353):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:run(917)) - PacketResponder blk_-678847007047035635_1001 1 Exception java.io.EOFException
    [junit] 	at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,223 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-678847007047035635_1001 terminating
    [junit] 2009-08-22 17:25:55,265 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 2 for block blk_-678847007047035635_1001 terminating
    [junit] 2009-08-22 17:25:55,265 WARN  hdfs.DFSClient (DFSClient.java:run(2601)) - DFSOutputStream ResponseProcessor exception  for block blk_-678847007047035635_1001java.io.IOException: Bad response ERROR for block blk_-678847007047035635_1001 from datanode 127.0.0.1:34356
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
    [junit] 
    [junit] 2009-08-22 17:25:55,265 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2630)) - Error Recovery for block blk_-678847007047035635_1001 bad datanode[2] 127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,265 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2674)) - Error Recovery for block blk_-678847007047035635_1001 in pipeline 127.0.0.1:50408, 127.0.0.1:38414, 127.0.0.1:34356: bad datanode 127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,268 INFO  datanode.DataNode (DataNode.java:logRecoverBlock(1727)) - Client calls recoverBlock(block=blk_-678847007047035635_1001, targets=[127.0.0.1:50408, 127.0.0.1:38414])
    [junit] 2009-08-22 17:25:55,272 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-678847007047035635_1001(length=1), newblock=blk_-678847007047035635_1002(length=1), datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,273 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-678847007047035635_1001(length=1), newblock=blk_-678847007047035635_1002(length=1), datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,274 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-678847007047035635_1001, newgenerationstamp=1002, newlength=1, newtargets=[127.0.0.1:50408, 127.0.0.1:38414], closeFile=false, deleteBlock=false)
    [junit] 2009-08-22 17:25:55,274 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-678847007047035635_1002) successful
    [junit] 2009-08-22 17:25:55,275 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,275 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-678847007047035635_1002 src: /127.0.0.1:56775 dest: /127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-678847007047035635_1002
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,277 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-678847007047035635_1002 src: /127.0.0.1:41700 dest: /127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,277 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-678847007047035635_1002
    [junit] 2009-08-22 17:25:55,277 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,280 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(822)) - src: /127.0.0.1:41700, dest: /127.0.0.1:38414, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-2085369977, offset: 0, srvID: DS-2029722745-67.195.138.9-38414-1250961954958, blockid: blk_-678847007047035635_1002, duration: 2788947
    [junit] 2009-08-22 17:25:55,281 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:38414 is added to blk_-678847007047035635_1002 size 1
    [junit] 2009-08-22 17:25:55,282 INFO  DataNode.clienttrace (BlockReceiver.java:run(955)) - src: /127.0.0.1:56775, dest: /127.0.0.1:50408, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-2085369977, offset: 0, srvID: DS-881213117-67.195.138.9-50408-1250961953689, blockid: blk_-678847007047035635_1002, duration: 3449887
    [junit] 2009-08-22 17:25:55,281 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-678847007047035635_1002 terminating
    [junit] 2009-08-22 17:25:55,283 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50408 is added to blk_-678847007047035635_1002 size 1
    [junit] 2009-08-22 17:25:55,283 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-678847007047035635_1002 terminating
    [junit] 2009-08-22 17:25:55,285 INFO  hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /pipeline_Fi_16/foo is closed by DFSClient_-2085369977
    [junit] 2009-08-22 17:25:55,294 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=open	src=/pipeline_Fi_16/foo	dst=null	perm=null
    [junit] 2009-08-22 17:25:55,296 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:38414
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-22 17:25:55,297 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:38414, dest: /127.0.0.1:41701, bytes: 5, op: HDFS_READ, cliID: DFSClient_-2085369977, offset: 0, srvID: DS-2029722745-67.195.138.9-38414-1250961954958, blockid: blk_-678847007047035635_1002, duration: 237979
    [junit] 2009-08-22 17:25:55,299 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,399 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 53124
    [junit] 2009-08-22 17:25:55,400 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 53124: exiting
    [junit] 2009-08-22 17:25:55,400 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-22 17:25:55,400 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:38414, storageID=DS-2029722745-67.195.138.9-38414-1250961954958, infoPort=41030, ipcPort=53124):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,400 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-22 17:25:55,400 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 53124
    [junit] 2009-08-22 17:25:55,401 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-22 17:25:55,402 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:38414, storageID=DS-2029722745-67.195.138.9-38414-1250961954958, infoPort=41030, ipcPort=53124):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'} 
    [junit] 2009-08-22 17:25:55,402 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 53124
    [junit] 2009-08-22 17:25:55,402 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-22 17:25:55,504 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 60353
    [junit] 2009-08-22 17:25:55,504 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 60353: exiting
    [junit] 2009-08-22 17:25:55,504 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 60353
    [junit] 2009-08-22 17:25:55,505 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:34356, storageID=DS-259311802-67.195.138.9-34356-1250961954301, infoPort=50526, ipcPort=60353):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,505 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-22 17:25:55,505 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-22 17:25:55,505 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-22 17:25:55,505 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:34356, storageID=DS-259311802-67.195.138.9-34356-1250961954301, infoPort=50526, ipcPort=60353):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] 2009-08-22 17:25:55,506 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 60353
    [junit] 2009-08-22 17:25:55,506 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-22 17:25:55,608 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 44254
    [junit] 2009-08-22 17:25:55,608 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 44254: exiting
    [junit] 2009-08-22 17:25:55,609 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 44254
    [junit] 2009-08-22 17:25:55,609 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-22 17:25:55,609 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:50408, storageID=DS-881213117-67.195.138.9-50408-1250961953689, infoPort=39115, ipcPort=44254):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,609 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-22 17:25:55,611 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-22 17:25:55,611 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-22 17:25:55,612 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:50408, storageID=DS-881213117-67.195.138.9-50408-1250961953689, infoPort=39115, ipcPort=44254):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] 2009-08-22 17:25:55,612 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 44254
    [junit] 2009-08-22 17:25:55,612 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-22 17:25:55,736 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-22 17:25:55,736 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 36 34 
    [junit] 2009-08-22 17:25:55,736 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-22 17:25:55,746 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 42930
    [junit] 2009-08-22 17:25:55,746 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 42930: exiting
    [junit] 2009-08-22 17:25:55,746 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 42930: exiting
    [junit] 2009-08-22 17:25:55,748 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 42930: exiting
    [junit] 2009-08-22 17:25:55,748 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 42930
    [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 79.161 sec

checkfailure:

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:725: Tests failed!

Total time: 68 minutes 15 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Build failed in Hudson: Hadoop-Hdfs-trunk #61

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/61/changes

Changes:

[hairong] HDFS-553. BlockSender reports wrong failed position in ChecksumException. Contributed by Hairong Kuang.

[szetszwo] HDFS-561. Fix write pipeline READ_TIMEOUT in DataTransferProtocol.  Contributed by Kan Zhang

[szetszwo] HDFS-549. Allow a non-fault-inject test, which is specified by -Dtestcase, to be executed by the run-test-hdfs-fault-inject target.  Contributed by Konstantin Boudnik

------------------------------------------
[...truncated 221189 lines...]
    [junit] 2009-08-25 16:23:50,981 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-25 16:23:51,037 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:53933
    [junit] 2009-08-25 16:23:51,038 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-25 16:23:51,039 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=36715
    [junit] 2009-08-25 16:23:51,040 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-25 16:23:51,040 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 36715: starting
    [junit] 2009-08-25 16:23:51,040 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:56241, storageID=, infoPort=53933, ipcPort=36715)
    [junit] 2009-08-25 16:23:51,040 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 36715: starting
    [junit] 2009-08-25 16:23:51,041 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:56241 storage DS-98040998-67.195.138.9-56241-1251217431041
    [junit] 2009-08-25 16:23:51,042 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:56241
    [junit] 2009-08-25 16:23:51,086 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-98040998-67.195.138.9-56241-1251217431041 is assigned to data-node 127.0.0.1:56241
    [junit] 2009-08-25 16:23:51,087 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:56241, storageID=DS-98040998-67.195.138.9-56241-1251217431041, infoPort=53933, ipcPort=36715)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4
    [junit] 2009-08-25 16:23:51,087 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-25 16:23:51,096 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3 is not formatted.
    [junit] 2009-08-25 16:23:51,097 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-25 16:23:51,123 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-25 16:23:51,124 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-25 16:23:51,304 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4 is not formatted.
    [junit] 2009-08-25 16:23:51,304 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-25 16:23:51,565 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-25 16:23:51,565 INFO  datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 44609
    [junit] 2009-08-25 16:23:51,566 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-25 16:23:51,566 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1251238139566 with interval 21600000
    [junit] 2009-08-25 16:23:51,567 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-25 16:23:51,567 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 51589 webServer.getConnectors()[0].getLocalPort() returned 51589
    [junit] 2009-08-25 16:23:51,568 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 51589
    [junit] 2009-08-25 16:23:51,568 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-25 16:23:51,634 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:51589
    [junit] 2009-08-25 16:23:51,634 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-25 16:23:51,635 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=33485
    [junit] 2009-08-25 16:23:51,636 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-25 16:23:51,636 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:44609, storageID=, infoPort=51589, ipcPort=33485)
    [junit] 2009-08-25 16:23:51,636 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 33485: starting
    [junit] 2009-08-25 16:23:51,636 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 33485: starting
    [junit] 2009-08-25 16:23:51,638 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:44609 storage DS-1943273741-67.195.138.9-44609-1251217431637
    [junit] 2009-08-25 16:23:51,638 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:44609
    [junit] 2009-08-25 16:23:51,680 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1943273741-67.195.138.9-44609-1251217431637 is assigned to data-node 127.0.0.1:44609
    [junit] 2009-08-25 16:23:51,680 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
    [junit] 2009-08-25 16:23:51,681 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-25 16:23:51,683 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
    [junit] 2009-08-25 16:23:51,683 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-25 16:23:51,716 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-25 16:23:51,717 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-25 16:23:51,859 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-25 16:23:51,860 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-25 16:23:52,125 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-25 16:23:52,126 INFO  datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 53608
    [junit] 2009-08-25 16:23:52,127 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-25 16:23:52,127 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1251231381127 with interval 21600000
    [junit] 2009-08-25 16:23:52,128 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-25 16:23:52,129 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 56130 webServer.getConnectors()[0].getLocalPort() returned 56130
    [junit] 2009-08-25 16:23:52,129 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 56130
    [junit] 2009-08-25 16:23:52,129 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-25 16:23:52,185 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:56130
    [junit] 2009-08-25 16:23:52,186 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-25 16:23:52,187 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=36493
    [junit] 2009-08-25 16:23:52,187 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-25 16:23:52,188 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:53608, storageID=, infoPort=56130, ipcPort=36493)
    [junit] 2009-08-25 16:23:52,188 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 36493: starting
    [junit] 2009-08-25 16:23:52,188 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 36493: starting
    [junit] 2009-08-25 16:23:52,190 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:53608 storage DS-651771013-67.195.138.9-53608-1251217432189
    [junit] 2009-08-25 16:23:52,190 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,240 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-651771013-67.195.138.9-53608-1251217432189 is assigned to data-node 127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,241 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:53608, storageID=DS-651771013-67.195.138.9-53608-1251217432189, infoPort=56130, ipcPort=36493)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-25 16:23:52,241 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-25 16:23:52,279 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-25 16:23:52,279 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-25 16:23:52,421 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/pipeline_Fi_16/foo	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-25 16:23:52,424 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /pipeline_Fi_16/foo. blk_-7389658038059638367_1001
    [junit] 2009-08-25 16:23:52,424 INFO  protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35)) - FI: addBlock Pipeline[127.0.0.1:53608, 127.0.0.1:56241, 127.0.0.1:44609]
    [junit] 2009-08-25 16:23:52,425 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,426 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-25 16:23:52,426 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1001 src: /127.0.0.1:59613 dest: /127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,427 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:56241
    [junit] 2009-08-25 16:23:52,427 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-25 16:23:52,427 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1001 src: /127.0.0.1:50548 dest: /127.0.0.1:56241
    [junit] 2009-08-25 16:23:52,428 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:44609
    [junit] 2009-08-25 16:23:52,428 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-25 16:23:52,429 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1001 src: /127.0.0.1:45529 dest: /127.0.0.1:44609
    [junit] 2009-08-25 16:23:52,429 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:56241
    [junit] 2009-08-25 16:23:52,429 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,430 INFO  hdfs.DFSClientAspects (DFSClientAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_DFSClientAspects$2$9396d2df(47)) - FI: after pipelineInitNonAppend: hasError=false errorIndex=0
    [junit] 2009-08-25 16:23:52,430 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,431 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,431 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,431 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,431 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,431 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,431 INFO  fi.FiTestUtil (DataTransferTestUtil.java:run(170)) - FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
    [junit] 2009-08-25 16:23:52,431 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,431 WARN  datanode.DataNode (DataNode.java:checkDiskError(702)) - checkDiskError: exception: 
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:171)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:340)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-25 16:23:52,432 INFO  mortbay.log (?:invoke(?)) - Completed FSVolumeSet.checkDirs. Removed=0volumes. List of current volumes: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current
    [junit] 2009-08-25 16:23:52,432 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(569)) - Exception in receiveBlock for block blk_-7389658038059638367_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
    [junit] 2009-08-25 16:23:52,432 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(782)) - PacketResponder 0 for block blk_-7389658038059638367_1001 Interrupted.
    [junit] 2009-08-25 16:23:52,432 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-7389658038059638367_1001 terminating
    [junit] 2009-08-25 16:23:52,453 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(359)) - writeBlock blk_-7389658038059638367_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
    [junit] 2009-08-25 16:23:52,453 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:171)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:340)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-25 16:23:52,453 INFO  datanode.DataNode (BlockReceiver.java:run(917)) - PacketResponder blk_-7389658038059638367_1001 1 Exception java.io.EOFException
    [junit] 	at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-25 16:23:52,478 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-7389658038059638367_1001 terminating
    [junit] 2009-08-25 16:23:52,479 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 2 for block blk_-7389658038059638367_1001 terminating
    [junit] 2009-08-25 16:23:52,479 WARN  hdfs.DFSClient (DFSClient.java:run(2601)) - DFSOutputStream ResponseProcessor exception  for block blk_-7389658038059638367_1001java.io.IOException: Bad response ERROR for block blk_-7389658038059638367_1001 from datanode 127.0.0.1:44609
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
    [junit] 
    [junit] 2009-08-25 16:23:52,479 INFO  hdfs.DFSClientAspects (DFSClientAspects.aj:ajc$before$org_apache_hadoop_hdfs_DFSClientAspects$4$1f7d37b0(77)) - FI: before pipelineErrorAfterInit: errorIndex=2
    [junit] 2009-08-25 16:23:52,479 INFO  fi.FiTestUtil (DataTransferTestUtil.java:run(235)) - pipeline_Fi_16, errorIndex=2, successfully verified.
    [junit] 2009-08-25 16:23:52,479 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2630)) - Error Recovery for block blk_-7389658038059638367_1001 bad datanode[2] 127.0.0.1:44609
    [junit] 2009-08-25 16:23:52,480 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2674)) - Error Recovery for block blk_-7389658038059638367_1001 in pipeline 127.0.0.1:53608, 127.0.0.1:56241, 127.0.0.1:44609: bad datanode 127.0.0.1:44609
    [junit] 2009-08-25 16:23:52,482 INFO  datanode.DataNode (DataNode.java:logRecoverBlock(1727)) - Client calls recoverBlock(block=blk_-7389658038059638367_1001, targets=[127.0.0.1:53608, 127.0.0.1:56241])
    [junit] 2009-08-25 16:23:52,485 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-7389658038059638367_1001(length=1), newblock=blk_-7389658038059638367_1002(length=1), datanode=127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,487 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-7389658038059638367_1001(length=1), newblock=blk_-7389658038059638367_1002(length=1), datanode=127.0.0.1:56241
    [junit] 2009-08-25 16:23:52,488 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-7389658038059638367_1001, newgenerationstamp=1002, newlength=1, newtargets=[127.0.0.1:53608, 127.0.0.1:56241], closeFile=false, deleteBlock=false)
    [junit] 2009-08-25 16:23:52,488 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-7389658038059638367_1002) successful
    [junit] 2009-08-25 16:23:52,489 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,489 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-25 16:23:52,490 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1002 src: /127.0.0.1:59618 dest: /127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,490 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-7389658038059638367_1002
    [junit] 2009-08-25 16:23:52,490 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:56241
    [junit] 2009-08-25 16:23:52,491 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-25 16:23:52,491 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1002 src: /127.0.0.1:50553 dest: /127.0.0.1:56241
    [junit] 2009-08-25 16:23:52,491 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-7389658038059638367_1002
    [junit] 2009-08-25 16:23:52,491 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,492 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,492 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,492 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,492 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,492 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-25 16:23:52,494 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(822)) - src: /127.0.0.1:50553, dest: /127.0.0.1:56241, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1922674027, offset: 0, srvID: DS-98040998-67.195.138.9-56241-1251217431041, blockid: blk_-7389658038059638367_1002, duration: 1825731
    [junit] 2009-08-25 16:23:52,494 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-7389658038059638367_1002 terminating
    [junit] 2009-08-25 16:23:52,494 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56241 is added to blk_-7389658038059638367_1002 size 1
    [junit] 2009-08-25 16:23:52,495 INFO  DataNode.clienttrace (BlockReceiver.java:run(955)) - src: /127.0.0.1:59618, dest: /127.0.0.1:53608, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1922674027, offset: 0, srvID: DS-651771013-67.195.138.9-53608-1251217432189, blockid: blk_-7389658038059638367_1002, duration: 2850663
    [junit] 2009-08-25 16:23:52,495 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-7389658038059638367_1002 terminating
    [junit] 2009-08-25 16:23:52,535 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:53608 is added to blk_-7389658038059638367_1002 size 1
    [junit] 2009-08-25 16:23:52,536 INFO  hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /pipeline_Fi_16/foo is closed by DFSClient_1922674027
    [junit] 2009-08-25 16:23:52,558 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=open	src=/pipeline_Fi_16/foo	dst=null	perm=null
    [junit] 2009-08-25 16:23:52,559 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:53608
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-25 16:23:52,560 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(418)) - src: /127.0.0.1:53608, dest: /127.0.0.1:59620, bytes: 5, op: HDFS_READ, cliID: DFSClient_1922674027, offset: 0, srvID: DS-651771013-67.195.138.9-53608-1251217432189, blockid: blk_-7389658038059638367_1002, duration: 223811
    [junit] 2009-08-25 16:23:52,561 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:53608
    [junit] 2009-08-25 16:23:52,662 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 36493
    [junit] 2009-08-25 16:23:52,662 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36493
    [junit] 2009-08-25 16:23:52,662 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-25 16:23:52,662 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 36493: exiting
    [junit] 2009-08-25 16:23:52,662 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-25 16:23:52,662 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:53608, storageID=DS-651771013-67.195.138.9-53608-1251217432189, infoPort=56130, ipcPort=36493):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-25 16:23:52,663 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-25 16:23:52,664 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:53608, storageID=DS-651771013-67.195.138.9-53608-1251217432189, infoPort=56130, ipcPort=36493):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-25 16:23:52,664 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 36493
    [junit] 2009-08-25 16:23:52,664 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-25 16:23:52,765 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 33485
    [junit] 2009-08-25 16:23:52,766 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 33485: exiting
    [junit] 2009-08-25 16:23:52,766 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 33485
    [junit] 2009-08-25 16:23:52,766 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-25 16:23:52,766 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-25 16:23:52,766 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-25 16:23:52,767 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-25 16:23:52,767 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] 2009-08-25 16:23:52,767 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 33485
    [junit] 2009-08-25 16:23:52,767 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-25 16:23:52,869 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 36715
    [junit] 2009-08-25 16:23:52,870 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 36715: exiting
    [junit] 2009-08-25 16:23:52,870 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-25 16:23:52,870 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:56241, storageID=DS-98040998-67.195.138.9-56241-1251217431041, infoPort=53933, ipcPort=36715):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-25 16:23:52,870 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-25 16:23:52,870 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36715
    [junit] 2009-08-25 16:23:52,872 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-25 16:23:52,872 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-25 16:23:52,873 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:56241, storageID=DS-98040998-67.195.138.9-56241-1251217431041, infoPort=53933, ipcPort=36715):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] 2009-08-25 16:23:52,873 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 36715
    [junit] 2009-08-25 16:23:52,873 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-25 16:23:52,975 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-25 16:23:52,975 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 45 43 
    [junit] 2009-08-25 16:23:52,975 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-25 16:23:52,981 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 54008
    [junit] 2009-08-25 16:23:52,981 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 54008: exiting
    [junit] 2009-08-25 16:23:52,981 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 54008: exiting
    [junit] 2009-08-25 16:23:52,982 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 54008: exiting
    [junit] 2009-08-25 16:23:52,982 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-25 16:23:52,982 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 54008: exiting
    [junit] 2009-08-25 16:23:52,981 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 54008: exiting
    [junit] 2009-08-25 16:23:52,981 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 54008: exiting
    [junit] 2009-08-25 16:23:52,981 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 54008: exiting
    [junit] 2009-08-25 16:23:52,982 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 54008: exiting
    [junit] 2009-08-25 16:23:52,982 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 54008
    [junit] 2009-08-25 16:23:52,982 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 54008: exiting
    [junit] 2009-08-25 16:23:52,982 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 54008: exiting
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 228.82 sec

checkfailure:

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:727: Tests failed!

Total time: 71 minutes 16 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Build failed in Hudson: Hadoop-Hdfs-trunk #60

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/60/

------------------------------------------
[...truncated 225499 lines...]
    [junit] 2009-08-24 15:29:58,782 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 48789 webServer.getConnectors()[0].getLocalPort() returned 48789
    [junit] 2009-08-24 15:29:58,782 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 48789
    [junit] 2009-08-24 15:29:58,783 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-24 15:29:58,843 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:48789
    [junit] 2009-08-24 15:29:58,844 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-24 15:29:58,845 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=58723
    [junit] 2009-08-24 15:29:58,846 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-24 15:29:58,846 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 58723: starting
    [junit] 2009-08-24 15:29:58,846 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:51187, storageID=, infoPort=48789, ipcPort=58723)
    [junit] 2009-08-24 15:29:58,846 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 58723: starting
    [junit] 2009-08-24 15:29:58,849 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:51187 storage DS-1737465689-67.195.138.9-51187-1251127798848
    [junit] 2009-08-24 15:29:58,849 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:51187
    [junit] 2009-08-24 15:29:58,907 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1737465689-67.195.138.9-51187-1251127798848 is assigned to data-node 127.0.0.1:51187
    [junit] 2009-08-24 15:29:58,907 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:51187, storageID=DS-1737465689-67.195.138.9-51187-1251127798848, infoPort=48789, ipcPort=58723)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4
    [junit] 2009-08-24 15:29:58,908 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-24 15:29:58,916 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3 is not formatted.
    [junit] 2009-08-24 15:29:58,916 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-24 15:29:58,939 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-24 15:29:58,940 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-24 15:29:59,108 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4 is not formatted.
    [junit] 2009-08-24 15:29:59,109 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-24 15:29:59,408 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-24 15:29:59,408 INFO  datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 55493
    [junit] 2009-08-24 15:29:59,408 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-24 15:29:59,409 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1251142032409 with interval 21600000
    [junit] 2009-08-24 15:29:59,410 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-24 15:29:59,410 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 47165 webServer.getConnectors()[0].getLocalPort() returned 47165
    [junit] 2009-08-24 15:29:59,411 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 47165
    [junit] 2009-08-24 15:29:59,411 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-24 15:29:59,472 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:47165
    [junit] 2009-08-24 15:29:59,472 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-24 15:29:59,473 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=50562
    [junit] 2009-08-24 15:29:59,474 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-24 15:29:59,474 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 50562: starting
    [junit] 2009-08-24 15:29:59,474 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:55493, storageID=, infoPort=47165, ipcPort=50562)
    [junit] 2009-08-24 15:29:59,474 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 50562: starting
    [junit] 2009-08-24 15:29:59,476 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:55493 storage DS-434709537-67.195.138.9-55493-1251127799475
    [junit] 2009-08-24 15:29:59,476 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:55493
    [junit] 2009-08-24 15:29:59,519 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-434709537-67.195.138.9-55493-1251127799475 is assigned to data-node 127.0.0.1:55493
    [junit] 2009-08-24 15:29:59,520 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:55493, storageID=DS-434709537-67.195.138.9-55493-1251127799475, infoPort=47165, ipcPort=50562)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
    [junit] 2009-08-24 15:29:59,520 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-24 15:29:59,528 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
    [junit] 2009-08-24 15:29:59,529 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-24 15:29:59,557 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-24 15:29:59,557 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-24 15:29:59,721 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-24 15:29:59,722 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-24 15:29:59,987 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-24 15:29:59,988 INFO  datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 36230
    [junit] 2009-08-24 15:29:59,988 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-24 15:29:59,988 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1251129010988 with interval 21600000
    [junit] 2009-08-24 15:29:59,990 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-24 15:29:59,990 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 56533 webServer.getConnectors()[0].getLocalPort() returned 56533
    [junit] 2009-08-24 15:29:59,990 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 56533
    [junit] 2009-08-24 15:29:59,991 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-24 15:30:00,052 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:56533
    [junit] 2009-08-24 15:30:00,053 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-24 15:30:00,054 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=52683
    [junit] 2009-08-24 15:30:00,055 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-24 15:30:00,055 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 52683: starting
    [junit] 2009-08-24 15:30:00,055 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:36230, storageID=, infoPort=56533, ipcPort=52683)
    [junit] 2009-08-24 15:30:00,055 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 52683: starting
    [junit] 2009-08-24 15:30:00,057 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:36230 storage DS-1245667935-67.195.138.9-36230-1251127800056
    [junit] 2009-08-24 15:30:00,057 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,101 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1245667935-67.195.138.9-36230-1251127800056 is assigned to data-node 127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,102 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:36230, storageID=DS-1245667935-67.195.138.9-36230-1251127800056, infoPort=56533, ipcPort=52683)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-24 15:30:00,102 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-24 15:30:00,128 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-24 15:30:00,129 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-24 15:30:00,306 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/pipeline_Fi_16/foo	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-24 15:30:00,308 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /pipeline_Fi_16/foo. blk_-3596213809731970953_1001
    [junit] 2009-08-24 15:30:00,341 INFO  protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35)) - FI: addBlock Pipeline[127.0.0.1:36230, 127.0.0.1:51187, 127.0.0.1:55493]
    [junit] 2009-08-24 15:30:00,342 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,343 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-24 15:30:00,343 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3596213809731970953_1001 src: /127.0.0.1:49173 dest: /127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,344 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:51187
    [junit] 2009-08-24 15:30:00,345 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-24 15:30:00,345 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3596213809731970953_1001 src: /127.0.0.1:37124 dest: /127.0.0.1:51187
    [junit] 2009-08-24 15:30:00,347 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:55493
    [junit] 2009-08-24 15:30:00,347 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-24 15:30:00,347 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3596213809731970953_1001 src: /127.0.0.1:44709 dest: /127.0.0.1:55493
    [junit] 2009-08-24 15:30:00,348 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:51187
    [junit] 2009-08-24 15:30:00,348 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,349 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,350 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,350 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,350 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,350 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,350 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,350 INFO  fi.FiTestUtil (DataTransferTestUtil.java:run(151)) - FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:55493
    [junit] 2009-08-24 15:30:00,351 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,351 WARN  datanode.DataNode (DataNode.java:checkDiskError(702)) - checkDiskError: exception: 
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:55493
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-24 15:30:00,352 INFO  mortbay.log (?:invoke(?)) - Completed FSVolumeSet.checkDirs. Removed=0volumes. List of current volumes: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current 
    [junit] 2009-08-24 15:30:00,352 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(569)) - Exception in receiveBlock for block blk_-3596213809731970953_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:55493
    [junit] 2009-08-24 15:30:00,352 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(782)) - PacketResponder 0 for block blk_-3596213809731970953_1001 Interrupted.
    [junit] 2009-08-24 15:30:00,352 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-3596213809731970953_1001 terminating
    [junit] 2009-08-24 15:30:00,352 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-3596213809731970953_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:55493
    [junit] 2009-08-24 15:30:00,353 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:55493, storageID=DS-434709537-67.195.138.9-55493-1251127799475, infoPort=47165, ipcPort=50562):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:55493
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-24 15:30:00,354 INFO  datanode.DataNode (BlockReceiver.java:run(917)) - PacketResponder blk_-3596213809731970953_1001 1 Exception java.io.EOFException
    [junit] 	at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-24 15:30:00,354 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-3596213809731970953_1001 terminating
    [junit] 2009-08-24 15:30:00,355 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 2 for block blk_-3596213809731970953_1001 terminating
    [junit] 2009-08-24 15:30:00,394 WARN  hdfs.DFSClient (DFSClient.java:run(2601)) - DFSOutputStream ResponseProcessor exception  for block blk_-3596213809731970953_1001java.io.IOException: Bad response ERROR for block blk_-3596213809731970953_1001 from datanode 127.0.0.1:55493
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
    [junit] 
    [junit] 2009-08-24 15:30:00,395 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2630)) - Error Recovery for block blk_-3596213809731970953_1001 bad datanode[2] 127.0.0.1:55493
    [junit] 2009-08-24 15:30:00,395 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2674)) - Error Recovery for block blk_-3596213809731970953_1001 in pipeline 127.0.0.1:36230, 127.0.0.1:51187, 127.0.0.1:55493: bad datanode 127.0.0.1:55493
    [junit] 2009-08-24 15:30:00,397 INFO  datanode.DataNode (DataNode.java:logRecoverBlock(1727)) - Client calls recoverBlock(block=blk_-3596213809731970953_1001, targets=[127.0.0.1:36230, 127.0.0.1:51187])
    [junit] 2009-08-24 15:30:00,401 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-3596213809731970953_1001(length=1), newblock=blk_-3596213809731970953_1002(length=1), datanode=127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,402 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-3596213809731970953_1001(length=1), newblock=blk_-3596213809731970953_1002(length=1), datanode=127.0.0.1:51187
    [junit] 2009-08-24 15:30:00,403 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-3596213809731970953_1001, newgenerationstamp=1002, newlength=1, newtargets=[127.0.0.1:36230, 127.0.0.1:51187], closeFile=false, deleteBlock=false)
    [junit] 2009-08-24 15:30:00,403 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-3596213809731970953_1002) successful
    [junit] 2009-08-24 15:30:00,405 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,405 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-24 15:30:00,405 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3596213809731970953_1002 src: /127.0.0.1:49178 dest: /127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,405 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-3596213809731970953_1002
    [junit] 2009-08-24 15:30:00,406 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:51187
    [junit] 2009-08-24 15:30:00,406 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-24 15:30:00,406 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3596213809731970953_1002 src: /127.0.0.1:37129 dest: /127.0.0.1:51187
    [junit] 2009-08-24 15:30:00,406 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-3596213809731970953_1002
    [junit] 2009-08-24 15:30:00,407 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:36230
    [junit] 2009-08-24 15:30:00,408 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,408 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,408 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,408 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,408 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-24 15:30:00,481 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(822)) - src: /127.0.0.1:37129, dest: /127.0.0.1:51187, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_766334557, offset: 0, srvID: DS-1737465689-67.195.138.9-51187-1251127798848, blockid: blk_-3596213809731970953_1002, duration: 74295043
    [junit] 2009-08-24 15:30:00,482 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-3596213809731970953_1002 terminating
    [junit] 2009-08-24 15:30:00,483 INFO  DataNode.clienttrace (BlockReceiver.java:run(955)) - src: /127.0.0.1:49178, dest: /127.0.0.1:36230, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_766334557, offset: 0, srvID: DS-1245667935-67.195.138.9-36230-1251127800056, blockid: blk_-3596213809731970953_1002, duration: 74937220
    [junit] 2009-08-24 15:30:00,483 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:51187 is added to blk_-3596213809731970953_1002 size 1
    [junit] 2009-08-24 15:30:00,483 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-3596213809731970953_1002 terminating
    [junit] 2009-08-24 15:30:00,484 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36230 is added to blk_-3596213809731970953_1002 size 1
    [junit] 2009-08-24 15:30:00,485 INFO  hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /pipeline_Fi_16/foo is closed by DFSClient_766334557
    [junit] 2009-08-24 15:30:00,501 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=open	src=/pipeline_Fi_16/foo	dst=null	perm=null
    [junit] 2009-08-24 15:30:00,502 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:51187
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-24 15:30:00,503 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:51187, dest: /127.0.0.1:37130, bytes: 5, op: HDFS_READ, cliID: DFSClient_766334557, offset: 0, srvID: DS-1737465689-67.195.138.9-51187-1251127798848, blockid: blk_-3596213809731970953_1002, duration: 239131
    [junit] 2009-08-24 15:30:00,503 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:51187
    [junit] 2009-08-24 15:30:00,504 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 52683
    [junit] 2009-08-24 15:30:00,505 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 52683: exiting
    [junit] 2009-08-24 15:30:00,505 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 52683
    [junit] 2009-08-24 15:30:00,505 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-24 15:30:00,505 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-24 15:30:00,505 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:36230, storageID=DS-1245667935-67.195.138.9-36230-1251127800056, infoPort=56533, ipcPort=52683):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-24 15:30:00,507 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-24 15:30:00,508 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-24 15:30:00,508 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:36230, storageID=DS-1245667935-67.195.138.9-36230-1251127800056, infoPort=56533, ipcPort=52683):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'} 
    [junit] 2009-08-24 15:30:00,508 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 52683
    [junit] 2009-08-24 15:30:00,508 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-24 15:30:00,622 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 50562
    [junit] 2009-08-24 15:30:00,623 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 50562: exiting
    [junit] 2009-08-24 15:30:00,623 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-24 15:30:00,623 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 50562
    [junit] 2009-08-24 15:30:00,623 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:55493, storageID=DS-434709537-67.195.138.9-55493-1251127799475, infoPort=47165, ipcPort=50562):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-24 15:30:00,623 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-24 15:30:00,626 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-24 15:30:00,626 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-24 15:30:00,626 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:55493, storageID=DS-434709537-67.195.138.9-55493-1251127799475, infoPort=47165, ipcPort=50562):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'} 
    [junit] 2009-08-24 15:30:00,626 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 50562
    [junit] 2009-08-24 15:30:00,626 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-24 15:30:00,802 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 58723
    [junit] 2009-08-24 15:30:00,803 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 58723: exiting
    [junit] 2009-08-24 15:30:00,804 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 58723
    [junit] 2009-08-24 15:30:00,804 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:51187, storageID=DS-1737465689-67.195.138.9-51187-1251127798848, infoPort=48789, ipcPort=58723):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-24 15:30:00,804 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-24 15:30:00,804 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-24 15:30:00,805 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-24 15:30:00,805 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:51187, storageID=DS-1737465689-67.195.138.9-51187-1251127798848, infoPort=48789, ipcPort=58723):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'} 
    [junit] 2009-08-24 15:30:00,805 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 58723
    [junit] 2009-08-24 15:30:00,806 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-24 15:30:00,946 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-24 15:30:00,946 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 68 40 
    [junit] 2009-08-24 15:30:00,946 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-24 15:30:00,958 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 39380
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 39380: exiting
    [junit] 2009-08-24 15:30:00,959 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 39380: exiting
    [junit] 2009-08-24 15:30:00,960 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 39380: exiting
    [junit] 2009-08-24 15:30:00,960 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-24 15:30:00,960 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 39380
    [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 73.905 sec

checkfailure:

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:725: Tests failed!

Total time: 60 minutes 56 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Build failed in Hudson: Hadoop-Hdfs-trunk #59

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/59/

------------------------------------------
[...truncated 226537 lines...]
    [junit] 2009-08-23 12:47:51,082 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-23 12:47:51,083 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-23 12:47:51,230 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-23 12:47:51,230 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-23 12:47:51,539 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-23 12:47:51,539 INFO  datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 48509
    [junit] 2009-08-23 12:47:51,539 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-23 12:47:51,540 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1251036241540 with interval 21600000
    [junit] 2009-08-23 12:47:51,541 INFO  http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-23 12:47:51,541 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 52070 webServer.getConnectors()[0].getLocalPort() returned 52070
    [junit] 2009-08-23 12:47:51,542 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 52070
    [junit] 2009-08-23 12:47:51,542 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-23 12:47:51,601 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:52070
    [junit] 2009-08-23 12:47:51,601 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-23 12:47:51,602 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=48832
    [junit] 2009-08-23 12:47:51,603 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-23 12:47:51,603 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 48832: starting
    [junit] 2009-08-23 12:47:51,603 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 48832: starting
    [junit] 2009-08-23 12:47:51,604 INFO  datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:48509, storageID=, infoPort=52070, ipcPort=48832)
    [junit] 2009-08-23 12:47:51,605 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:48509 storage DS-830920231-67.195.138.9-48509-1251031671604
    [junit] 2009-08-23 12:47:51,606 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,650 INFO  datanode.DataNode (DataNode.java:register(571)) - New storage id DS-830920231-67.195.138.9-48509-1251031671604 is assigned to data-node 127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,651 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:48509, storageID=DS-830920231-67.195.138.9-48509-1251031671604, infoPort=52070, ipcPort=48832)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'} 
    [junit] 2009-08-23 12:47:51,651 INFO  datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-23 12:47:51,688 INFO  datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 2 msecs
    [junit] 2009-08-23 12:47:51,688 INFO  datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
    [junit] 2009-08-23 12:47:51,841 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/pipeline_Fi_16/foo	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-23 12:47:51,844 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /pipeline_Fi_16/foo. blk_-3823116340774608643_1001
    [junit] 2009-08-23 12:47:51,883 INFO  protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35)) - FI: addBlock Pipeline[127.0.0.1:33089, 127.0.0.1:48509, 127.0.0.1:42956]
    [junit] 2009-08-23 12:47:51,884 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,885 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,885 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3823116340774608643_1001 src: /127.0.0.1:59253 dest: /127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,886 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,887 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,887 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3823116340774608643_1001 src: /127.0.0.1:45683 dest: /127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,888 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,888 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,888 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3823116340774608643_1001 src: /127.0.0.1:48571 dest: /127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,889 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,890 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,890 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,891 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,892 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,892 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,892 INFO  fi.FiTestUtil (DataTransferTestUtil.java:run(151)) - FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,892 WARN  datanode.DataNode (DataNode.java:checkDiskError(702)) - checkDiskError: exception: 
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-23 12:47:51,893 INFO  mortbay.log (?:invoke(?)) - Completed FSVolumeSet.checkDirs. Removed=0volumes. List of current volumes: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current
    [junit] 2009-08-23 12:47:51,893 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(569)) - Exception in receiveBlock for block blk_-3823116340774608643_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(782)) - PacketResponder 0 for block blk_-3823116340774608643_1001 Interrupted.
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-3823116340774608643_1001 terminating
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-3823116340774608643_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,894 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:42956, storageID=DS-1038161714-67.195.138.9-42956-1251031670395, infoPort=48218, ipcPort=60502):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:42956
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-23 12:47:51,894 INFO  datanode.DataNode (BlockReceiver.java:run(917)) - PacketResponder blk_-3823116340774608643_1001 1 Exception java.io.EOFException
    [junit] 	at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:51,926 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-3823116340774608643_1001 terminating
    [junit] 2009-08-23 12:47:51,926 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 2 for block blk_-3823116340774608643_1001 terminating
    [junit] 2009-08-23 12:47:51,927 WARN  hdfs.DFSClient (DFSClient.java:run(2601)) - DFSOutputStream ResponseProcessor exception  for block blk_-3823116340774608643_1001java.io.IOException: Bad response ERROR for block blk_-3823116340774608643_1001 from datanode 127.0.0.1:42956
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
    [junit] 
    [junit] 2009-08-23 12:47:51,927 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2630)) - Error Recovery for block blk_-3823116340774608643_1001 bad datanode[2] 127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,927 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2674)) - Error Recovery for block blk_-3823116340774608643_1001 in pipeline 127.0.0.1:33089, 127.0.0.1:48509, 127.0.0.1:42956: bad datanode 127.0.0.1:42956
    [junit] 2009-08-23 12:47:51,930 INFO  datanode.DataNode (DataNode.java:logRecoverBlock(1727)) - Client calls recoverBlock(block=blk_-3823116340774608643_1001, targets=[127.0.0.1:33089, 127.0.0.1:48509])
    [junit] 2009-08-23 12:47:51,935 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-3823116340774608643_1001(length=1), newblock=blk_-3823116340774608643_1002(length=1), datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,937 INFO  datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-3823116340774608643_1001(length=1), newblock=blk_-3823116340774608643_1002(length=1), datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,938 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-3823116340774608643_1001, newgenerationstamp=1002, newlength=1, newtargets=[127.0.0.1:33089, 127.0.0.1:48509], closeFile=false, deleteBlock=false)
    [junit] 2009-08-23 12:47:51,939 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-3823116340774608643_1002) successful
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3823116340774608643_1002 src: /127.0.0.1:59258 dest: /127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,941 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-3823116340774608643_1002
    [junit] 2009-08-23 12:47:51,942 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,942 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
    [junit] 2009-08-23 12:47:51,942 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-3823116340774608643_1002 src: /127.0.0.1:45688 dest: /127.0.0.1:48509
    [junit] 2009-08-23 12:47:51,943 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-3823116340774608643_1002
    [junit] 2009-08-23 12:47:51,943 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:33089
    [junit] 2009-08-23 12:47:51,944 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,944 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,944 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,945 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,945 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
    [junit] 2009-08-23 12:47:51,946 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(822)) - src: /127.0.0.1:45688, dest: /127.0.0.1:48509, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1407762613, offset: 0, srvID: DS-830920231-67.195.138.9-48509-1251031671604, blockid: blk_-3823116340774608643_1002, duration: 2397143
    [junit] 2009-08-23 12:47:51,987 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-3823116340774608643_1002 terminating
    [junit] 2009-08-23 12:47:51,987 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:48509 is added to blk_-3823116340774608643_1002 size 1
    [junit] 2009-08-23 12:47:51,989 INFO  DataNode.clienttrace (BlockReceiver.java:run(955)) - src: /127.0.0.1:59258, dest: /127.0.0.1:33089, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1407762613, offset: 0, srvID: DS-2088916918-67.195.138.9-33089-1251031670997, blockid: blk_-3823116340774608643_1002, duration: 45005337
    [junit] 2009-08-23 12:47:51,990 INFO  datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-3823116340774608643_1002 terminating
    [junit] 2009-08-23 12:47:51,990 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:33089 is added to blk_-3823116340774608643_1002 size 1
    [junit] 2009-08-23 12:47:51,992 INFO  hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /pipeline_Fi_16/foo is closed by DFSClient_1407762613
    [junit] 2009-08-23 12:47:52,003 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=open	src=/pipeline_Fi_16/foo	dst=null	perm=null
    [junit] 2009-08-23 12:47:52,005 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:48509
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-23 12:47:52,006 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:48509, dest: /127.0.0.1:45689, bytes: 5, op: HDFS_READ, cliID: DFSClient_1407762613, offset: 0, srvID: DS-830920231-67.195.138.9-48509-1251031671604, blockid: blk_-3823116340774608643_1002, duration: 250492
    [junit] 2009-08-23 12:47:52,007 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:48509
    [junit] 2009-08-23 12:47:52,031 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 1 time(s).
    [junit] 2009-08-23 12:47:52,108 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 48832
    [junit] 2009-08-23 12:47:52,108 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 48832: exiting
    [junit] 2009-08-23 12:47:52,108 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 48832
    [junit] 2009-08-23 12:47:52,109 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-23 12:47:52,109 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:48509, storageID=DS-830920231-67.195.138.9-48509-1251031671604, infoPort=52070, ipcPort=48832):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:52,109 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-23 12:47:52,110 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-23 12:47:52,110 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:48509, storageID=DS-830920231-67.195.138.9-48509-1251031671604, infoPort=52070, ipcPort=48832):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-23 12:47:52,111 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 48832
    [junit] 2009-08-23 12:47:52,111 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-23 12:47:52,231 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 58600
    [junit] 2009-08-23 12:47:52,231 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 58600: exiting
    [junit] 2009-08-23 12:47:52,233 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 58600
    [junit] 2009-08-23 12:47:52,233 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-23 12:47:52,233 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-23 12:47:52,233 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:33089, storageID=DS-2088916918-67.195.138.9-33089-1251031670997, infoPort=49467, ipcPort=58600):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:52,235 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-23 12:47:52,236 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-23 12:47:52,237 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:33089, storageID=DS-2088916918-67.195.138.9-33089-1251031670997, infoPort=49467, ipcPort=58600):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] 2009-08-23 12:47:52,237 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 58600
    [junit] 2009-08-23 12:47:52,237 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-23 12:47:52,340 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 60502
    [junit] 2009-08-23 12:47:52,341 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 60502
    [junit] 2009-08-23 12:47:52,342 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-23 12:47:52,342 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:42956, storageID=DS-1038161714-67.195.138.9-42956-1251031670395, infoPort=48218, ipcPort=60502):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-23 12:47:52,342 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-23 12:47:52,341 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 60502: exiting
    [junit] 2009-08-23 12:47:52,343 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-23 12:47:52,343 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:42956, storageID=DS-1038161714-67.195.138.9-42956-1251031670395, infoPort=48218, ipcPort=60502):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] 2009-08-23 12:47:52,343 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 60502
    [junit] 2009-08-23 12:47:52,343 INFO  datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-23 12:47:52,473 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-23 12:47:52,473 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 56 23 
    [junit] 2009-08-23 12:47:52,473 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server on 46463
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 46463: exiting
    [junit] 2009-08-23 12:47:52,495 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 46463
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 46463: exiting
    [junit] 2009-08-23 12:47:52,496 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 46463: exiting
    [junit] 2009-08-23 12:47:52,497 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] Tests run: 7, Failures: 0, Errors: 1, Time elapsed: 84.843 sec
    [junit] 2009-08-23 12:47:52,544 ERROR hdfs.DFSClient (DFSClient.java:close(1084)) - Exception closing file /pipeline_Fi_12/foo : java.io.IOException: Bad connect ack with firstBadLink as 127.0.0.1:58044
    [junit] java.io.IOException: Bad connect ack with firstBadLink as 127.0.0.1:58044
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.createBlockOutputStream(DFSClient.java:2865)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSClient.java:2789)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2399)
    [junit] 2009-08-23 12:47:52,545 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 2 time(s).
    [junit] 2009-08-23 12:47:53,545 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 3 time(s).
    [junit] 2009-08-23 12:47:54,545 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 4 time(s).
    [junit] 2009-08-23 12:47:55,546 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 5 time(s).
    [junit] 2009-08-23 12:47:56,546 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 6 time(s).
    [junit] 2009-08-23 12:47:57,546 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 7 time(s).
    [junit] 2009-08-23 12:47:58,547 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 8 time(s).
    [junit] 2009-08-23 12:47:59,547 INFO  ipc.Client (Client.java:handleConnectionFailure(395)) - Retrying connect to server: localhost/127.0.0.1:49694. Already tried 9 time(s).
    [junit] 2009-08-23 12:47:59,548 WARN  hdfs.DFSClient (DFSClient.java:run(1140)) - Problem renewing lease for DFSClient_-805227971 for a period of 0 seconds. Will retry shortly...
    [junit] java.net.ConnectException: Call to localhost/127.0.0.1:49694 failed on connection exception: java.net.ConnectException: Connection refused
    [junit] 	at org.apache.hadoop.ipc.Client.wrapException(Client.java:793)
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:769)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:223)
    [junit] 	at $Proxy5.renewLease(Unknown Source)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy5.renewLease(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.renew(DFSClient.java:1115)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1131)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] Caused by: java.net.ConnectException: Connection refused
    [junit] 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    [junit] 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
    [junit] 	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    [junit] 	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:368)
    [junit] 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:325)
    [junit] 	at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:197)
    [junit] 	at org.apache.hadoop.ipc.Client.getConnection(Client.java:886)
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:746)
    [junit] 	... 12 more
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol FAILED

checkfailure:
    [touch] Creating /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:722: The following error occurred while executing this line:
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:363: The following error occurred while executing this line:
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:646: The following error occurred while executing this line:
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:641: The following error occurred while executing this line:
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:705: Tests failed!

Total time: 74 minutes 6 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...