Posted to common-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.zones.apache.org> on 2009/04/13 11:56:58 UTC

Build failed in Hudson: Hadoop-trunk #805

See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/805/changes

Changes:

[gkesavan] Increasing the java max heap

[yhemanth] HADOOP-5485. Mask actions in the fair scheduler's servlet UI based on value of webinterface.private.actions. Contributed by Vinod Kumar Vavilapalli.

------------------------------------------
[...truncated 530740 lines...]
    [junit] 2009-04-13 10:09:46,929 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 40454
    [junit] 2009-04-13 10:09:46,929 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-13 10:09:46,930 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239627829930 with interval 21600000
    [junit] 2009-04-13 10:09:46,932 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 46694
    [junit] 2009-04-13 10:09:46,932 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-13 10:09:47,009 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:46694
    [junit] 2009-04-13 10:09:47,009 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-13 10:09:47,011 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=42407
    [junit] 2009-04-13 10:09:47,011 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-13 10:09:47,012 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 42407: starting
    [junit] 2009-04-13 10:09:47,012 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:40454, storageID=, infoPort=46694, ipcPort=42407)
    [junit] 2009-04-13 10:09:47,012 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 42407: starting
    [junit] 2009-04-13 10:09:47,011 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 42407: starting
    [junit] 2009-04-13 10:09:47,011 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 42407: starting
    [junit] 2009-04-13 10:09:47,014 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:40454 storage DS-1951101531-67.195.138.9-40454-1239617387013
    [junit] 2009-04-13 10:09:47,014 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:40454
    [junit] 2009-04-13 10:09:47,016 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1951101531-67.195.138.9-40454-1239617387013 is assigned to data-node 127.0.0.1:40454
    [junit] 2009-04-13 10:09:47,017 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:40454, storageID=DS-1951101531-67.195.138.9-40454-1239617387013, infoPort=46694, ipcPort=42407)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 
    [junit] 2009-04-13 10:09:47,039 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-13 10:09:47,048 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2009-04-13 10:09:47,049 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-13 10:09:47,059 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2009-04-13 10:09:47,060 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-13 10:09:47,076 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-13 10:09:47,077 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-13 10:09:47,097 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-04-13 10:09:47,098 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 52203
    [junit] 2009-04-13 10:09:47,098 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-13 10:09:47,098 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239635642098 with interval 21600000
    [junit] 2009-04-13 10:09:47,100 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 42364
    [junit] 2009-04-13 10:09:47,100 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-13 10:09:47,168 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:42364
    [junit] 2009-04-13 10:09:47,169 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-13 10:09:47,170 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=47466
    [junit] 2009-04-13 10:09:47,171 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-13 10:09:47,171 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 47466: starting
    [junit] 2009-04-13 10:09:47,171 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 47466: starting
    [junit] 2009-04-13 10:09:47,172 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 47466: starting
    [junit] 2009-04-13 10:09:47,172 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:52203, storageID=, infoPort=42364, ipcPort=47466)
    [junit] 2009-04-13 10:09:47,172 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 47466: starting
    [junit] 2009-04-13 10:09:47,174 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:52203 storage DS-2040718447-67.195.138.9-52203-1239617387173
    [junit] 2009-04-13 10:09:47,174 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:52203
    [junit] 2009-04-13 10:09:47,176 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-2040718447-67.195.138.9-52203-1239617387173 is assigned to data-node 127.0.0.1:52203
    [junit] 2009-04-13 10:09:47,177 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:52203, storageID=DS-2040718447-67.195.138.9-52203-1239617387173, infoPort=42364, ipcPort=47466)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-13 10:09:47,183 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-13 10:09:47,213 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 10:09:47,214 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 10:09:47,222 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-13 10:09:47,222 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-13 10:09:47,227 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 10:09:47,227 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-04-13 10:09:47,230 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_-7127448766874534415_1001
    [junit] 2009-04-13 10:09:47,232 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-7127448766874534415_1001 src: /127.0.0.1:60935 dest: /127.0.0.1:52203
    [junit] 2009-04-13 10:09:47,234 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-7127448766874534415_1001 src: /127.0.0.1:50552 dest: /127.0.0.1:40454
    [junit] 2009-04-13 10:09:47,236 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:50552, dest: /127.0.0.1:40454, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1114645871, offset: 0, srvID: DS-1951101531-67.195.138.9-40454-1239617387013, blockid: blk_-7127448766874534415_1001
    [junit] 2009-04-13 10:09:47,236 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-7127448766874534415_1001 terminating
    [junit] 2009-04-13 10:09:47,237 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:40454 is added to blk_-7127448766874534415_1001 size 4096
    [junit] 2009-04-13 10:09:47,237 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:60935, dest: /127.0.0.1:52203, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1114645871, offset: 0, srvID: DS-2040718447-67.195.138.9-52203-1239617387173, blockid: blk_-7127448766874534415_1001
    [junit] 2009-04-13 10:09:47,238 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-7127448766874534415_1001 terminating
    [junit] 2009-04-13 10:09:47,238 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:52203 is added to blk_-7127448766874534415_1001 size 4096
    [junit] 2009-04-13 10:09:47,239 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_-2404219822504591506_1001
    [junit] 2009-04-13 10:09:47,241 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-2404219822504591506_1001 src: /127.0.0.1:60937 dest: /127.0.0.1:52203
    [junit] 2009-04-13 10:09:47,242 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-2404219822504591506_1001 src: /127.0.0.1:50554 dest: /127.0.0.1:40454
    [junit] 2009-04-13 10:09:47,243 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:50554, dest: /127.0.0.1:40454, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1114645871, offset: 0, srvID: DS-1951101531-67.195.138.9-40454-1239617387013, blockid: blk_-2404219822504591506_1001
    [junit] 2009-04-13 10:09:47,244 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-2404219822504591506_1001 terminating
    [junit] 2009-04-13 10:09:47,244 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:40454 is added to blk_-2404219822504591506_1001 size 4096
    [junit] 2009-04-13 10:09:47,245 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:60937, dest: /127.0.0.1:52203, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1114645871, offset: 0, srvID: DS-2040718447-67.195.138.9-52203-1239617387173, blockid: blk_-2404219822504591506_1001
    [junit] 2009-04-13 10:09:47,245 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:52203 is added to blk_-2404219822504591506_1001 size 4096
    [junit] 2009-04-13 10:09:47,245 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-2404219822504591506_1001 terminating
    [junit] 2009-04-13 10:09:47,246 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 10:09:47,248 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-435320670
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId1986177400
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1727249633
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2104421858
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort42407
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort47466
    [junit] Info: key = bytes_written; val = 0
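(Editor's note: the "hadoop services:" lines above are the result of a JMX query against the test JVM's MBeanServer. A minimal, assumed sketch of reproducing that listing — not the test's own code — using only standard JMX APIs:)

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Assumed sketch: list the DataNode MBeans registered in this JVM,
    // mirroring the "hadoop services:" lines in the log above.
    public class ListDataNodeMBeans {
      public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        for (ObjectName name :
            mbs.queryNames(new ObjectName("hadoop:service=DataNode,*"), null)) {
          System.out.println("hadoop services: " + name);
        }
      }
    }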
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-04-13 10:09:47,351 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 47466
    [junit] 2009-04-13 10:09:47,351 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 47466: exiting
    [junit] 2009-04-13 10:09:47,351 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 47466: exiting
    [junit] 2009-04-13 10:09:47,352 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:52203, storageID=DS-2040718447-67.195.138.9-52203-1239617387173, infoPort=42364, ipcPort=47466):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-13 10:09:47,352 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-13 10:09:47,352 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 47466: exiting
    [junit] 2009-04-13 10:09:47,352 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-13 10:09:47,351 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 47466
    [junit] 2009-04-13 10:09:47,352 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-13 10:09:47,353 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:52203, storageID=DS-2040718447-67.195.138.9-52203-1239617387173, infoPort=42364, ipcPort=47466):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-13 10:09:47,353 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 47466
    [junit] 2009-04-13 10:09:47,354 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-04-13 10:09:47,492 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 42407
    [junit] 2009-04-13 10:09:47,492 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 42407: exiting
    [junit] 2009-04-13 10:09:47,492 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 42407: exiting
    [junit] 2009-04-13 10:09:47,492 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 42407: exiting
    [junit] 2009-04-13 10:09:47,493 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-13 10:09:47,492 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-13 10:09:47,492 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 42407
    [junit] 2009-04-13 10:09:47,493 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:40454, storageID=DS-1951101531-67.195.138.9-40454-1239617387013, infoPort=46694, ipcPort=42407):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-13 10:09:47,494 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-13 10:09:47,494 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:40454, storageID=DS-1951101531-67.195.138.9-40454-1239617387013, infoPort=46694, ipcPort=42407):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-13 10:09:47,494 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 42407
    [junit] 2009-04-13 10:09:47,494 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-13 10:09:47,596 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2352)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-13 10:09:47,596 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 14 1 
    [junit] 2009-04-13 10:09:47,596 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-13 10:09:47,597 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 46535
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 46535: exiting
    [junit] 2009-04-13 10:09:47,598 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 46535
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 3.073 sec
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.095 sec
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-04-13 10:09:48,540 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-04-13 10:09:48,553 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
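(Editor's note: the usage text printed above comes from GenericOptionsParser, which TestGenericsUtil exercises. A minimal sketch of how a Hadoop tool typically picks up these generic options via ToolRunner — the class name MyTool is hypothetical, not part of the test:)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical tool: ToolRunner runs GenericOptionsParser over args,
    // so -D, -fs, -jt, -conf, -files, -libjars and -archives are folded
    // into the Configuration before run() is invoked.
    public class MyTool extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        System.out.println("fs.default.name = " + getConf().get("fs.default.name"));
        return 0;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyTool(), args));
      }
    }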
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: -1836054109297996847(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: -2596117234015333285(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 357/228(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: -716207878806992244(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: -4659769549955560815(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: -2482031780503863692(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 423/493(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: -3255303311733533427(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.002 sec
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-04-13 10:09:50,370 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
    [junit] 2009-04-13 10:09:50,877 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 9129
    [junit] 2009-04-13 10:09:50,952 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 9132 9131 9129 ]
    [junit] 2009-04-13 10:09:57,535 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 9135 9133 9131 9129 9145 9143 9141 9139 9137 ]
    [junit] 2009-04-13 10:09:57,546 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-04-13 10:09:57,546 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
    [junit] 2009-04-13 10:09:57,547 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 9129 with SIGTERM. Exit code 0
    [junit] 2009-04-13 10:09:57,597 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.323 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-04-13 10:09:58,488 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.574 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.183 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.091 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml :770: Tests failed!

Total time: 180 minutes 24 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Hudson build is back to normal: Hadoop-trunk #811

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/811/changes



Build failed in Hudson: Hadoop-trunk #810

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/810/changes

Changes:

[sharad] HADOOP-5691. Makes org.apache.hadoop.mapreduce.Reducer concrete class. Contributed by Amareshwari.

[ddas] HADOOP-5646. Fixes a problem in TestQueueCapacities.  Contributed by Vinod Kumar Vavilapalli.

[sharad] HADOOP-5647. Fix TestJobHistory to not depend on /tmp. Contributed by Ravi Gummadi.

[sharad] HADOOP-5533. Reverted in 0.20 as branch is frozen, vote being out for 0.20 release.

[sharad] HADOOP-5533. Recovery duration shown on the jobtracker webpage is inaccurate. Contributed by Amar Kamat.

[hairong] HADOOP-5638. More improvement on block placement performance. Contributed by Hairong Kuang.

[hairong] HADOOP-5655. TestMRServerPorts fails on java.net.BindException. Contributed by Devaraj Das.

[yhemanth] HADOOP-4490. Provide ability to run tasks as job owners. Contributed by Sreekanth Ramakrishnan.

[yhemanth] HADOOP-5396. Provide ability to refresh queue ACLs in the JobTracker without having to restart the daemon. Contributed by Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli.

------------------------------------------
[...truncated 437438 lines...]
    [junit] 2009-04-17 19:16:34,789 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 49847
    [junit] 2009-04-17 19:16:34,789 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-17 19:16:34,789 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1240009014789 with interval 21600000
    [junit] 2009-04-17 19:16:34,791 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 44201
    [junit] 2009-04-17 19:16:34,791 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-17 19:16:34,862 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:44201
    [junit] 2009-04-17 19:16:34,863 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-17 19:16:34,864 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=55657
    [junit] 2009-04-17 19:16:34,865 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-17 19:16:34,865 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 55657: starting
    [junit] 2009-04-17 19:16:34,865 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 55657: starting
    [junit] 2009-04-17 19:16:34,866 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:49847, storageID=, infoPort=44201, ipcPort=55657)
    [junit] 2009-04-17 19:16:34,866 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 55657: starting
    [junit] 2009-04-17 19:16:34,867 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 55657: starting
    [junit] 2009-04-17 19:16:34,867 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:49847 storage DS-390729134-67.195.138.9-49847-1239995794866
    [junit] 2009-04-17 19:16:34,869 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:49847
    [junit] 2009-04-17 19:16:34,871 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-390729134-67.195.138.9-49847-1239995794866 is assigned to data-node 127.0.0.1:49847
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 
    [junit] 2009-04-17 19:16:34,871 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:49847, storageID=DS-390729134-67.195.138.9-49847-1239995794866, infoPort=44201, ipcPort=55657)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-17 19:16:34,872 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-17 19:16:34,880 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2009-04-17 19:16:34,881 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-17 19:16:34,893 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2009-04-17 19:16:34,893 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-17 19:16:34,910 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-17 19:16:34,911 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-17 19:16:34,925 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-04-17 19:16:34,926 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 42990
    [junit] 2009-04-17 19:16:34,926 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-17 19:16:34,927 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1240015733927 with interval 21600000
    [junit] 2009-04-17 19:16:34,928 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 33659
    [junit] 2009-04-17 19:16:34,929 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-17 19:16:34,997 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:33659
    [junit] 2009-04-17 19:16:34,997 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-17 19:16:34,999 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=39678
    [junit] 2009-04-17 19:16:35,000 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-17 19:16:35,000 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 39678: starting
    [junit] 2009-04-17 19:16:35,000 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 39678: starting
    [junit] 2009-04-17 19:16:35,000 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 39678: starting
    [junit] 2009-04-17 19:16:35,001 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 39678: starting
    [junit] 2009-04-17 19:16:35,000 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:42990, storageID=, infoPort=33659, ipcPort=39678)
    [junit] 2009-04-17 19:16:35,004 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:42990 storage DS-1632591988-67.195.138.9-42990-1239995795003
    [junit] 2009-04-17 19:16:35,005 INFO  net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:42990
    [junit] 2009-04-17 19:16:35,007 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1632591988-67.195.138.9-42990-1239995795003 is assigned to data-node 127.0.0.1:42990
    [junit] 2009-04-17 19:16:35,008 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:42990, storageID=DS-1632591988-67.195.138.9-42990-1239995795003, infoPort=33659, ipcPort=39678)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-17 19:16:35,014 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-17 19:16:35,035 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-17 19:16:35,043 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-17 19:16:35,052 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-17 19:16:35,052 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-17 19:16:35,055 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-17 19:16:35,055 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-04-17 19:16:35,059 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_7266987400406218604_1001
    [junit] 2009-04-17 19:16:35,062 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_7266987400406218604_1001 src: /127.0.0.1:46297 dest: /127.0.0.1:42990
    [junit] 2009-04-17 19:16:35,063 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_7266987400406218604_1001 src: /127.0.0.1:39724 dest: /127.0.0.1:49847
    [junit] 2009-04-17 19:16:35,065 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:39724, dest: /127.0.0.1:49847, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-390729134-67.195.138.9-49847-1239995794866, blockid: blk_7266987400406218604_1001
    [junit] 2009-04-17 19:16:35,066 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:49847 is added to blk_7266987400406218604_1001 size 4096
    [junit] 2009-04-17 19:16:35,066 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_7266987400406218604_1001 terminating
    [junit] 2009-04-17 19:16:35,066 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:46297, dest: /127.0.0.1:42990, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-1632591988-67.195.138.9-42990-1239995795003, blockid: blk_7266987400406218604_1001
    [junit] 2009-04-17 19:16:35,067 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_7266987400406218604_1001 terminating
    [junit] 2009-04-17 19:16:35,067 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:42990 is added to blk_7266987400406218604_1001 size 4096
    [junit] 2009-04-17 19:16:35,069 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_-6102255395417213207_1001
    [junit] 2009-04-17 19:16:35,070 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-6102255395417213207_1001 src: /127.0.0.1:46299 dest: /127.0.0.1:42990
    [junit] 2009-04-17 19:16:35,071 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-6102255395417213207_1001 src: /127.0.0.1:39726 dest: /127.0.0.1:49847
    [junit] 2009-04-17 19:16:35,073 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:39726, dest: /127.0.0.1:49847, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-390729134-67.195.138.9-49847-1239995794866, blockid: blk_-6102255395417213207_1001
    [junit] 2009-04-17 19:16:35,073 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-6102255395417213207_1001 terminating
    [junit] 2009-04-17 19:16:35,073 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:49847 is added to blk_-6102255395417213207_1001 size 4096
    [junit] 2009-04-17 19:16:35,074 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:46299, dest: /127.0.0.1:42990, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-1632591988-67.195.138.9-42990-1239995795003, blockid: blk_-6102255395417213207_1001
    [junit] 2009-04-17 19:16:35,074 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-6102255395417213207_1001 terminating
    [junit] 2009-04-17 19:16:35,075 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:42990 is added to blk_-6102255395417213207_1001 size 4096
    [junit] 2009-04-17 19:16:35,076 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId1704640536
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId745608920
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1623638996
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-2024658520
    [junit] 2009-04-17 19:16:35,076 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort39678
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort55657
    [junit] Info: key = bytes_written; val = 0
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-04-17 19:16:35,179 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 39678
    [junit] 2009-04-17 19:16:35,180 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 39678: exiting
    [junit] 2009-04-17 19:16:35,180 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 39678: exiting
    [junit] 2009-04-17 19:16:35,180 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 39678: exiting
    [junit] 2009-04-17 19:16:35,180 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:42990, storageID=DS-1632591988-67.195.138.9-42990-1239995795003, infoPort=33659, ipcPort=39678):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-17 19:16:35,180 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 39678
    [junit] 2009-04-17 19:16:35,180 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-17 19:16:35,181 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-17 19:16:35,181 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-17 19:16:35,182 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:42990, storageID=DS-1632591988-67.195.138.9-42990-1239995795003, infoPort=33659, ipcPort=39678):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-17 19:16:35,182 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 39678
    [junit] 2009-04-17 19:16:35,182 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-04-17 19:16:35,283 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 55657
    [junit] 2009-04-17 19:16:35,283 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 55657: exiting
    [junit] 2009-04-17 19:16:35,284 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 55657: exiting
    [junit] 2009-04-17 19:16:35,284 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 55657: exiting
    [junit] 2009-04-17 19:16:35,284 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-17 19:16:35,284 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 55657
    [junit] 2009-04-17 19:16:35,284 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:49847, storageID=DS-390729134-67.195.138.9-49847-1239995794866, infoPort=44201, ipcPort=55657):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-17 19:16:35,284 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-17 19:16:35,285 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-17 19:16:35,286 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:49847, storageID=DS-390729134-67.195.138.9-49847-1239995794866, infoPort=44201, ipcPort=55657):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-17 19:16:35,286 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 55657
    [junit] 2009-04-17 19:16:35,286 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-17 19:16:35,387 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2359)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-17 19:16:35,387 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 3Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 10 0 
    [junit] 2009-04-17 19:16:35,387 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-17 19:16:35,389 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-17 19:16:35,389 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 54034
    [junit] 2009-04-17 19:16:35,389 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 54034: exiting
    [junit] 2009-04-17 19:16:35,389 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 54034: exiting
    [junit] 2009-04-17 19:16:35,390 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 54034: exiting
    [junit] 2009-04-17 19:16:35,390 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 54034: exiting
    [junit] 2009-04-17 19:16:35,390 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 54034: exiting
    [junit] 2009-04-17 19:16:35,390 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 54034: exiting
    [junit] 2009-04-17 19:16:35,390 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 54034: exiting
    [junit] 2009-04-17 19:16:35,391 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-17 19:16:35,391 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 54034: exiting
    [junit] 2009-04-17 19:16:35,391 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 54034
    [junit] 2009-04-17 19:16:35,391 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 54034: exiting
    [junit] 2009-04-17 19:16:35,391 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 54034: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 3.994 sec
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.093 sec
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-04-17 19:16:36,350 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-04-17 19:16:36,363 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: 7051833450906176865(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: 6202609412681042081(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 217/3(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: -2489818280100611772(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: -3518012158898894371(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: 7442652760225698008(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 49/219(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: -5501590211969917853(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.097 sec
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-04-17 19:16:38,231 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
    [junit] 2009-04-17 19:16:38,737 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 22042
    [junit] 2009-04-17 19:16:38,786 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 22042 22044 22045 ]
    [junit] 2009-04-17 19:16:45,320 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 22058 22042 22056 22044 22046 22050 22048 22054 22052 ]
    [junit] 2009-04-17 19:16:45,333 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 22042 with SIGTERM. Exit code 0
    [junit] 2009-04-17 19:16:45,333 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-04-17 19:16:45,334 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
    [junit] 2009-04-17 19:16:45,417 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.275 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-04-17 19:16:46,334 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.616 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:770: Tests failed!

Total time: 168 minutes 47 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Build failed in Hudson: Hadoop-trunk #809

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/809/changes

Changes:

[gkesavan] to ignore external reference while doing svn stat

[cdouglas] HADOOP-5671. Fix FNF exceptions when copying from old versions of
HftpFileSystem. Contributed by Tsz Wo (Nicholas), SZE

------------------------------------------
[...truncated 472300 lines...]
    [junit] 2009-04-16 16:41:55,570 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 34757
    [junit] 2009-04-16 16:41:55,570 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-16 16:41:55,571 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239912299571 with interval 21600000
    [junit] 2009-04-16 16:41:55,572 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 45619
    [junit] 2009-04-16 16:41:55,573 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-16 16:41:55,635 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:45619
    [junit] 2009-04-16 16:41:55,636 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-16 16:41:55,637 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=39026
    [junit] 2009-04-16 16:41:55,638 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-16 16:41:55,638 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 39026: starting
    [junit] 2009-04-16 16:41:55,638 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 39026: starting
    [junit] 2009-04-16 16:41:55,639 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 39026: starting
    [junit] 2009-04-16 16:41:55,639 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:34757, storageID=, infoPort=45619, ipcPort=39026)
    [junit] 2009-04-16 16:41:55,639 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 39026: starting
    [junit] 2009-04-16 16:41:55,642 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34757 storage DS-202014945-67.195.138.9-34757-1239900115640
    [junit] 2009-04-16 16:41:55,642 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:34757
    [junit] 2009-04-16 16:41:55,645 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-202014945-67.195.138.9-34757-1239900115640 is assigned to data-node 127.0.0.1:34757
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 
    [junit] 2009-04-16 16:41:55,645 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:34757, storageID=DS-202014945-67.195.138.9-34757-1239900115640, infoPort=45619, ipcPort=39026)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-16 16:41:55,646 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-16 16:41:55,650 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2009-04-16 16:41:55,674 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-16 16:41:55,700 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-16 16:41:55,701 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-16 16:41:55,703 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2009-04-16 16:41:55,704 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-16 16:41:55,760 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-04-16 16:41:55,761 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 44866
    [junit] 2009-04-16 16:41:55,761 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-16 16:41:55,762 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239919522762 with interval 21600000
    [junit] 2009-04-16 16:41:55,763 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 48206
    [junit] 2009-04-16 16:41:55,764 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-16 16:41:55,826 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:48206
    [junit] 2009-04-16 16:41:55,827 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-16 16:41:55,828 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=36362
    [junit] 2009-04-16 16:41:55,829 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-16 16:41:55,829 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 36362: starting
    [junit] 2009-04-16 16:41:55,829 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 36362: starting
    [junit] 2009-04-16 16:41:55,830 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 36362: starting
    [junit] 2009-04-16 16:41:55,830 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:44866, storageID=, infoPort=48206, ipcPort=36362)
    [junit] 2009-04-16 16:41:55,830 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 36362: starting
    [junit] 2009-04-16 16:41:55,832 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:44866 storage DS-1255121694-67.195.138.9-44866-1239900115831
    [junit] 2009-04-16 16:41:55,833 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:44866
    [junit] 2009-04-16 16:41:55,836 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1255121694-67.195.138.9-44866-1239900115831 is assigned to data-node 127.0.0.1:44866
    [junit] 2009-04-16 16:41:55,836 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:44866, storageID=DS-1255121694-67.195.138.9-44866-1239900115831, infoPort=48206, ipcPort=36362)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-16 16:41:55,850 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-16 16:41:55,885 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 2 msecs
    [junit] 2009-04-16 16:41:55,885 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 16:41:55,885 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-16 16:41:55,886 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 16:41:55,892 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 16:41:55,893 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-04-16 16:41:55,896 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_-1495843911417557730_1001
    [junit] 2009-04-16 16:41:55,898 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-1495843911417557730_1001 src: /127.0.0.1:51431 dest: /127.0.0.1:44866
    [junit] 2009-04-16 16:41:55,899 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-1495843911417557730_1001 src: /127.0.0.1:47904 dest: /127.0.0.1:34757
    [junit] 2009-04-16 16:41:55,902 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:47904, dest: /127.0.0.1:34757, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1306349718, offset: 0, srvID: DS-202014945-67.195.138.9-34757-1239900115640, blockid: blk_-1495843911417557730_1001
    [junit] 2009-04-16 16:41:55,902 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-1495843911417557730_1001 terminating
    [junit] 2009-04-16 16:41:55,903 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:34757 is added to blk_-1495843911417557730_1001 size 4096
    [junit] 2009-04-16 16:41:55,904 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:51431, dest: /127.0.0.1:44866, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1306349718, offset: 0, srvID: DS-1255121694-67.195.138.9-44866-1239900115831, blockid: blk_-1495843911417557730_1001
    [junit] 2009-04-16 16:41:55,905 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-1495843911417557730_1001 terminating
    [junit] 2009-04-16 16:41:55,905 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:44866 is added to blk_-1495843911417557730_1001 size 4096
    [junit] 2009-04-16 16:41:55,908 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_7673617674655465072_1001
    [junit] 2009-04-16 16:41:55,909 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_7673617674655465072_1001 src: /127.0.0.1:51433 dest: /127.0.0.1:44866
    [junit] 2009-04-16 16:41:55,911 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_7673617674655465072_1001 src: /127.0.0.1:47906 dest: /127.0.0.1:34757
    [junit] 2009-04-16 16:41:55,913 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:47906, dest: /127.0.0.1:34757, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1306349718, offset: 0, srvID: DS-202014945-67.195.138.9-34757-1239900115640, blockid: blk_7673617674655465072_1001
    [junit] 2009-04-16 16:41:55,914 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_7673617674655465072_1001 terminating
    [junit] 2009-04-16 16:41:55,915 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:51433, dest: /127.0.0.1:44866, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1306349718, offset: 0, srvID: DS-1255121694-67.195.138.9-44866-1239900115831, blockid: blk_7673617674655465072_1001
    [junit] 2009-04-16 16:41:55,915 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:34757 is added to blk_7673617674655465072_1001 size 4096
    [junit] 2009-04-16 16:41:55,916 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_7673617674655465072_1001 terminating
    [junit] 2009-04-16 16:41:55,918 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:44866 is added to blk_7673617674655465072_1001 size 4096
    [junit] 2009-04-16 16:41:55,918 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 16:41:55,926 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-859554033
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId661941702
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1244911132
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2010014075
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort36362
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort39026
    [junit] Info: key = bytes_written; val = 0
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-04-16 16:41:56,030 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 36362
    [junit] 2009-04-16 16:41:56,030 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 36362: exiting
    [junit] 2009-04-16 16:41:56,031 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 36362: exiting
    [junit] 2009-04-16 16:41:56,031 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-16 16:41:56,031 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:44866, storageID=DS-1255121694-67.195.138.9-44866-1239900115831, infoPort=48206, ipcPort=36362):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-16 16:41:56,031 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36362
    [junit] 2009-04-16 16:41:56,031 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-16 16:41:56,031 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 36362: exiting
    [junit] 2009-04-16 16:41:56,033 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-16 16:41:56,033 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:44866, storageID=DS-1255121694-67.195.138.9-44866-1239900115831, infoPort=48206, ipcPort=36362):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-16 16:41:56,033 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 36362
    [junit] 2009-04-16 16:41:56,033 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-04-16 16:41:56,037 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 39026
    [junit] 2009-04-16 16:41:56,037 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 39026: exiting
    [junit] 2009-04-16 16:41:56,038 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 39026: exiting
    [junit] 2009-04-16 16:41:56,038 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:34757, storageID=DS-202014945-67.195.138.9-34757-1239900115640, infoPort=45619, ipcPort=39026):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-16 16:41:56,038 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 39026: exiting
    [junit] 2009-04-16 16:41:56,038 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-16 16:41:56,038 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-16 16:41:56,038 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 39026
    [junit] 2009-04-16 16:41:56,039 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-16 16:41:56,040 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:34757, storageID=DS-202014945-67.195.138.9-34757-1239900115640, infoPort=45619, ipcPort=39026):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-16 16:41:56,040 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 39026
    [junit] 2009-04-16 16:41:56,041 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-16 16:41:56,144 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2359)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-16 16:41:56,144 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 3Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 4 7 
    [junit] 2009-04-16 16:41:56,144 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-16 16:41:56,146 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 16:41:56,146 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 57572
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 57572
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 57572: exiting
    [junit] 2009-04-16 16:41:56,148 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 57572: exiting
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 57572: exiting
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 57572: exiting
    [junit] 2009-04-16 16:41:56,148 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 57572: exiting
    [junit] 2009-04-16 16:41:56,148 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 57572: exiting
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 57572: exiting
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 57572: exiting
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 57572: exiting
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-16 16:41:56,147 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 57572: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.995 sec
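The startup and shutdown chatter above ("Starting DataNode 0/1 ...", "Shutting down the Mini HDFS Cluster") is the usual footprint of a test running an in-process two-DataNode MiniDFSCluster. A minimal sketch of that pattern, assuming the contemporary (0.20-era) constructor; this is not the failing test's code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Two DataNodes, freshly formatted storage, default rack placement.
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 2, true, null);
        try {
          cluster.waitActive();
          FileSystem fs = cluster.getFileSystem();
          FSDataOutputStream out = fs.create(new Path("/test"));
          out.write(new byte[4096]);   // produces 4096-byte HDFS_WRITE block traffic like that seen above
          out.close();
        } finally {
          cluster.shutdown();          // "Shutting down the Mini HDFS Cluster"
        }
      }
    }
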
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.099 sec
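The pattern in the output above is that iteration begins at the first key strictly greater than the given start key and then wraps around the sorted map. That ordering can be reproduced with a plain java.util.TreeMap; a minimal sketch, not the actual org.apache.hadoop.util.CyclicIteration implementation:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class CyclicIterationSketch {
      /** Keys of the map starting after 'start', wrapping around. */
      static <K, V> List<K> cyclicKeys(NavigableMap<K, V> map, K start) {
        List<K> order = new ArrayList<K>();
        order.addAll(map.tailMap(start, false).keySet());  // keys strictly greater than start
        order.addAll(map.headMap(start, true).keySet());   // then wrap to keys <= start
        return order;
      }

      public static void main(String[] args) {
        NavigableMap<Integer, Integer> map = new TreeMap<Integer, Integer>();
        for (int i : new int[] {0, 2, 4, 6}) {
          map.put(i, i);
        }
        for (int start = -1; start <= 7; start++) {
          System.out.println("start=" + start + ", iteration=" + cyclicKeys(map, start));
        }
      }
    }

Running this prints the same start/iteration table as the test output above for integers=[0, 2, 4, 6].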
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-04-16 16:41:57,100 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-04-16 16:41:57,112 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: 4680639402359298229(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: -3640854098025152805(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 215/342(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: 4670111772686314793(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: -2685125925851692850(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: -7344348011397284459(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 424/275(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: 9061172852525407180(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.015 sec
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-04-16 16:41:58,941 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
    [junit] 2009-04-16 16:41:59,448 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 16206
    [junit] 2009-04-16 16:41:59,528 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 16206 16208 16209 ]
    [junit] 2009-04-16 16:42:06,088 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 16216 16218 16220 16222 16206 16208 16210 16212 16214 ]
    [junit] 2009-04-16 16:42:06,098 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-04-16 16:42:06,099 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
    [junit] 2009-04-16 16:42:06,100 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 16206 with SIGTERM. Exit code 0
    [junit] 2009-04-16 16:42:06,148 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.297 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-04-16 16:42:07,097 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.615 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.186 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:770: Tests failed!

Total time: 182 minutes 15 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Build failed in Hudson: Hadoop-trunk #808

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/808/changes

Changes:

[cdouglas] HADOOP-5652. Fix a bug where in-memory segments are incorrectly retained in memory.

[cdouglas] HADOOP-5494. Modify sorted map output merger to lazily read values,
rather than buffering at least one record for each segment. Contributed by Devaraj Das.

------------------------------------------
[...truncated 422349 lines...]
    [junit] Starting DataNode 0 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2 
    [junit] 2009-04-16 00:31:45,692 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1  is not formatted.
    [junit] 2009-04-16 00:31:45,692 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-16 00:31:45,696 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data2  is not formatted.
    [junit] 2009-04-16 00:31:45,697 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-16 00:31:45,723 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-04-16 00:31:45,724 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 58652
    [junit] 2009-04-16 00:31:45,724 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-16 00:31:45,725 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239852806725 with interval 21600000
    [junit] 2009-04-16 00:31:45,726 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 41871
    [junit] 2009-04-16 00:31:45,727 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-16 00:31:45,795 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:41871
    [junit] 2009-04-16 00:31:45,796 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-16 00:31:45,797 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=51107
    [junit] 2009-04-16 00:31:45,798 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-16 00:31:45,798 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:58652, storageID=, infoPort=41871, ipcPort=51107)
    [junit] 2009-04-16 00:31:45,798 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 51107: starting
    [junit] 2009-04-16 00:31:45,798 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 51107: starting
    [junit] 2009-04-16 00:31:45,798 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 51107: starting
    [junit] 2009-04-16 00:31:45,798 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 51107: starting
    [junit] 2009-04-16 00:31:45,800 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:58652 storage DS-1049361957-67.195.138.9-58652-1239841905799
    [junit] 2009-04-16 00:31:45,800 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:58652
    [junit] 2009-04-16 00:31:45,803 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1049361957-67.195.138.9-58652-1239841905799 is assigned to data-node 127.0.0.1:58652
    [junit] 2009-04-16 00:31:45,803 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:58652, storageID=DS-1049361957-67.195.138.9-58652-1239841905799, infoPort=41871, ipcPort=51107)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 
    [junit] 2009-04-16 00:31:45,804 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-16 00:31:45,813 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2009-04-16 00:31:45,814 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-16 00:31:45,823 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2009-04-16 00:31:45,824 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-16 00:31:45,840 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-16 00:31:45,841 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-16 00:31:45,862 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-04-16 00:31:45,863 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 50214
    [junit] 2009-04-16 00:31:45,863 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-16 00:31:45,863 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239856900863 with interval 21600000
    [junit] 2009-04-16 00:31:45,865 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 52520
    [junit] 2009-04-16 00:31:45,865 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-16 00:31:45,933 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:52520
    [junit] 2009-04-16 00:31:45,934 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-16 00:31:45,936 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=39335
    [junit] 2009-04-16 00:31:45,938 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-16 00:31:45,938 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 39335: starting
    [junit] 2009-04-16 00:31:45,938 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 39335: starting
    [junit] 2009-04-16 00:31:45,938 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 39335: starting
    [junit] 2009-04-16 00:31:45,939 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:50214, storageID=, infoPort=52520, ipcPort=39335)
    [junit] 2009-04-16 00:31:45,939 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 39335: starting
    [junit] 2009-04-16 00:31:45,942 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50214 storage DS-1482098445-67.195.138.9-50214-1239841905941
    [junit] 2009-04-16 00:31:45,942 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:50214
    [junit] 2009-04-16 00:31:45,945 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1482098445-67.195.138.9-50214-1239841905941 is assigned to data-node 127.0.0.1:50214
    [junit] 2009-04-16 00:31:45,945 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:50214, storageID=DS-1482098445-67.195.138.9-50214-1239841905941, infoPort=52520, ipcPort=39335)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-16 00:31:45,945 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-16 00:31:45,969 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 00:31:45,970 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 00:31:45,998 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 00:31:45,999 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-04-16 00:31:46,012 WARN  namenode.FSNamesystem (ReplicationTargetChooser.java:chooseTarget(184)) - Not able to place enough replicas, still in need of 1
    [junit] 2009-04-16 00:31:46,013 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_-5749979104287390356_1001
    [junit] 2009-04-16 00:31:46,014 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-5749979104287390356_1001 src: /127.0.0.1:49399 dest: /127.0.0.1:58652
    [junit] 2009-04-16 00:31:46,016 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:49399, dest: /127.0.0.1:58652, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_845888485, offset: 0, srvID: DS-1049361957-67.195.138.9-58652-1239841905799, blockid: blk_-5749979104287390356_1001
    [junit] 2009-04-16 00:31:46,017 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-5749979104287390356_1001 terminating
    [junit] 2009-04-16 00:31:46,018 WARN  namenode.FSNamesystem (ReplicationTargetChooser.java:chooseTarget(184)) - Not able to place enough replicas, still in need of 1
    [junit] 2009-04-16 00:31:46,018 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_1711424159161289677_1001
    [junit] 2009-04-16 00:31:46,020 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_1711424159161289677_1001 src: /127.0.0.1:49400 dest: /127.0.0.1:58652
    [junit] 2009-04-16 00:31:46,021 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:58652 is added to blk_-5749979104287390356_1001 size 4096
    [junit] 2009-04-16 00:31:46,021 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:49400, dest: /127.0.0.1:58652, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_845888485, offset: 0, srvID: DS-1049361957-67.195.138.9-58652-1239841905799, blockid: blk_1711424159161289677_1001
    [junit] 2009-04-16 00:31:46,022 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_1711424159161289677_1001 terminating
    [junit] 2009-04-16 00:31:46,023 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:58652 is added to blk_1711424159161289677_1001 size 4096
    [junit] 2009-04-16 00:31:46,023 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 00:31:46,024 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 2 msecs
    [junit] 2009-04-16 00:31:46,025 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-16 00:31:46,029 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-1940080771
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId822741109
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-295578041
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-570316769
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort39335
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort51107
    [junit] Info: key = bytes_written; val = 0
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-04-16 00:31:46,170 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 39335
    [junit] 2009-04-16 00:31:46,170 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 39335: exiting
    [junit] 2009-04-16 00:31:46,171 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 39335
    [junit] 2009-04-16 00:31:46,171 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-16 00:31:46,171 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:50214, storageID=DS-1482098445-67.195.138.9-50214-1239841905941, infoPort=52520, ipcPort=39335):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-16 00:31:46,171 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 39335: exiting
    [junit] 2009-04-16 00:31:46,171 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 39335: exiting
    [junit] 2009-04-16 00:31:46,172 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-16 00:31:46,172 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-16 00:31:46,173 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:50214, storageID=DS-1482098445-67.195.138.9-50214-1239841905941, infoPort=52520, ipcPort=39335):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-16 00:31:46,173 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 39335
    [junit] 2009-04-16 00:31:46,173 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-04-16 00:31:46,275 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 51107
    [junit] 2009-04-16 00:31:46,275 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 51107: exiting
    [junit] 2009-04-16 00:31:46,275 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 51107
    [junit] 2009-04-16 00:31:46,275 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:58652, storageID=DS-1049361957-67.195.138.9-58652-1239841905799, infoPort=41871, ipcPort=51107):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-16 00:31:46,275 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-16 00:31:46,276 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 51107: exiting
    [junit] 2009-04-16 00:31:46,276 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 51107: exiting
    [junit] 2009-04-16 00:31:46,277 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-16 00:31:46,277 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-16 00:31:46,277 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:58652, storageID=DS-1049361957-67.195.138.9-58652-1239841905799, infoPort=41871, ipcPort=51107):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-16 00:31:46,278 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 51107
    [junit] 2009-04-16 00:31:46,278 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-16 00:31:46,379 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-16 00:31:46,379 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 22 11 
    [junit] 2009-04-16 00:31:46,380 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2359)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-16 00:31:46,380 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-16 00:31:46,381 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 60071
    [junit] 2009-04-16 00:31:46,381 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 60071: exiting
    [junit] 2009-04-16 00:31:46,381 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 60071: exiting
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 60071: exiting
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 60071: exiting
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 60071: exiting
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 60071: exiting
    [junit] 2009-04-16 00:31:46,383 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 60071
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 60071: exiting
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 60071: exiting
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 60071: exiting
    [junit] 2009-04-16 00:31:46,382 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 60071: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.047 sec
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.095 sec
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-04-16 00:31:47,228 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-04-16 00:31:47,241 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.183 sec
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: 6220126095491457729(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: 6879880268530561606(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 127/110(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: -5012941041600628027(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: -3620411627411460254(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: -6153794192447161273(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 102/161(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: -8717493133462479910(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.004 sec
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-04-16 00:31:49,054 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
    [junit] 2009-04-16 00:31:49,559 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 10373
    [junit] 2009-04-16 00:31:49,606 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 10376 10375 10373 ]
    [junit] 2009-04-16 00:31:56,144 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 10377 10393 10382 10386 10384 10390 10375 10388 10373 ]
    [junit] 2009-04-16 00:31:56,158 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-04-16 00:31:56,158 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
    [junit] 2009-04-16 00:31:56,160 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 10373 with SIGTERM. Exit code 0
    [junit] 2009-04-16 00:31:56,211 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.247 sec
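
For context on the ProcessTree lines above: the rogue task is started under setsid, so the root pid doubles as a process-group id, and the whole subtree is later signalled in one shot with SIGTERM. A small, purely illustrative sketch of such a group-wide signal from Java, shelling out to kill(1) in roughly the way the log describes (this is not the actual ProcessTree code):

    import java.io.IOException;

    public class KillProcessGroup {
      // Sends SIGTERM to every process in the group whose id equals 'pgid'.
      // A negative pid after "--" tells kill(1) to target the whole group.
      static int sigtermGroup(int pgid) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("kill", "-TERM", "--", "-" + pgid).start();
        return p.waitFor();  // 0 on success, as in "Exit code 0" above
      }
    }
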
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-04-16 00:31:57,124 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.573 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.188 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.09 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:770: Tests failed!

Total time: 186 minutes 22 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Build failed in Hudson: Hadoop-trunk #807

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/807/changes

Changes:

[shv] HADOOP-5509. PendingReplicationBlocks does not start monitor in the constructor. Contributed by Konstantin Shvachko.

[hairong] HADOOP-5644. Namenode is stuck in safe mode. Contributed by Suresh Srinivas.

[hairong] HADOOP-5654. TestReplicationPolicy.<init> fails on java.net.BindException. Contributed by Hairong Kuang.

[rangadi] HADOOP-5581. HDFS should throw FileNotFoundException when opening
a file that does not exist. (Brian Bockelman via rangadi)
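
To illustrate the HADOOP-5581 change above, a client opening a missing path would see something like the following. This is a hedged sketch; the path is a placeholder and the Configuration is assumed to point at an HDFS cluster.

    import java.io.FileNotFoundException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenMissingFile {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        try {
          fs.open(new Path("/no/such/file"));      // placeholder path
        } catch (FileNotFoundException e) {
          // After HADOOP-5581 this is the expected exception type.
          System.err.println("missing file: " + e.getMessage());
        }
      }
    }
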

------------------------------------------
[...truncated 466133 lines...]
    [junit] 2009-04-15 08:48:42,937 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 58571
    [junit] 2009-04-15 08:48:42,938 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-15 08:48:42,938 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239793416938 with interval 21600000
    [junit] 2009-04-15 08:48:42,940 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 48047
    [junit] 2009-04-15 08:48:42,940 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-15 08:48:43,003 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:48047
    [junit] 2009-04-15 08:48:43,004 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-15 08:48:43,005 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=55779
    [junit] 2009-04-15 08:48:43,006 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-15 08:48:43,006 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 55779: starting
    [junit] 2009-04-15 08:48:43,006 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 55779: starting
    [junit] 2009-04-15 08:48:43,006 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 55779: starting
    [junit] 2009-04-15 08:48:43,006 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:58571, storageID=, infoPort=48047, ipcPort=55779)
    [junit] 2009-04-15 08:48:43,006 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 55779: starting
    [junit] 2009-04-15 08:48:43,009 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:58571 storage DS-1227352605-67.195.138.9-58571-1239785323008
    [junit] 2009-04-15 08:48:43,009 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:58571
    [junit] 2009-04-15 08:48:43,018 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1227352605-67.195.138.9-58571-1239785323008 is assigned to data-node 127.0.0.1:58571
    [junit] 2009-04-15 08:48:43,018 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:58571, storageID=DS-1227352605-67.195.138.9-58571-1239785323008, infoPort=48047, ipcPort=55779)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 
    [junit] 2009-04-15 08:48:43,019 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-15 08:48:43,029 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2009-04-15 08:48:43,029 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-15 08:48:43,033 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2009-04-15 08:48:43,034 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-15 08:48:43,074 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 2 msecs
    [junit] 2009-04-15 08:48:43,074 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-15 08:48:43,078 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-04-15 08:48:43,079 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 36698
    [junit] 2009-04-15 08:48:43,079 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-15 08:48:43,079 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239803740079 with interval 21600000
    [junit] 2009-04-15 08:48:43,081 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 54532
    [junit] 2009-04-15 08:48:43,081 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-15 08:48:43,143 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:54532
    [junit] 2009-04-15 08:48:43,145 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-15 08:48:43,146 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=45581
    [junit] 2009-04-15 08:48:43,147 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-15 08:48:43,147 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 45581: starting
    [junit] 2009-04-15 08:48:43,147 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 45581: starting
    [junit] 2009-04-15 08:48:43,148 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 45581: starting
    [junit] 2009-04-15 08:48:43,147 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 45581: starting
    [junit] 2009-04-15 08:48:43,148 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:36698, storageID=, infoPort=54532, ipcPort=45581)
    [junit] 2009-04-15 08:48:43,150 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:36698 storage DS-291892191-67.195.138.9-36698-1239785323149
    [junit] 2009-04-15 08:48:43,150 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:36698
    [junit] 2009-04-15 08:48:43,153 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-291892191-67.195.138.9-36698-1239785323149 is assigned to data-node 127.0.0.1:36698
    [junit] 2009-04-15 08:48:43,153 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:36698, storageID=DS-291892191-67.195.138.9-36698-1239785323149, infoPort=54532, ipcPort=45581)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-15 08:48:43,160 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-15 08:48:43,201 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-15 08:48:43,202 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-15 08:48:43,203 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 3 msecs
    [junit] 2009-04-15 08:48:43,204 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-15 08:48:43,219 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-15 08:48:43,220 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-04-15 08:48:43,225 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_-923968447045708337_1001
    [junit] 2009-04-15 08:48:43,227 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-923968447045708337_1001 src: /127.0.0.1:49733 dest: /127.0.0.1:58571
    [junit] 2009-04-15 08:48:43,228 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-923968447045708337_1001 src: /127.0.0.1:58253 dest: /127.0.0.1:36698
    [junit] 2009-04-15 08:48:43,231 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:58253, dest: /127.0.0.1:36698, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844, offset: 0, srvID: DS-291892191-67.195.138.9-36698-1239785323149, blockid: blk_-923968447045708337_1001
    [junit] 2009-04-15 08:48:43,231 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-923968447045708337_1001 terminating
    [junit] 2009-04-15 08:48:43,272 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36698 is added to blk_-923968447045708337_1001 size 4096
    [junit] 2009-04-15 08:48:43,273 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:49733, dest: /127.0.0.1:58571, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844, offset: 0, srvID: DS-1227352605-67.195.138.9-58571-1239785323008, blockid: blk_-923968447045708337_1001
    [junit] 2009-04-15 08:48:43,274 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-923968447045708337_1001 terminating
    [junit] 2009-04-15 08:48:43,274 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:58571 is added to blk_-923968447045708337_1001 size 4096
    [junit] 2009-04-15 08:48:43,276 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_-3625756261105283729_1001
    [junit] 2009-04-15 08:48:43,278 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-3625756261105283729_1001 src: /127.0.0.1:49735 dest: /127.0.0.1:58571
    [junit] 2009-04-15 08:48:43,279 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-3625756261105283729_1001 src: /127.0.0.1:58255 dest: /127.0.0.1:36698
    [junit] 2009-04-15 08:48:43,283 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:58255, dest: /127.0.0.1:36698, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844, offset: 0, srvID: DS-291892191-67.195.138.9-36698-1239785323149, blockid: blk_-3625756261105283729_1001
    [junit] 2009-04-15 08:48:43,283 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36698 is added to blk_-3625756261105283729_1001 size 4096
    [junit] 2009-04-15 08:48:43,283 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-3625756261105283729_1001 terminating
    [junit] 2009-04-15 08:48:43,285 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:58571 is added to blk_-3625756261105283729_1001 size 4096
    [junit] 2009-04-15 08:48:43,285 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:49735, dest: /127.0.0.1:58571, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_83436844, offset: 0, srvID: DS-1227352605-67.195.138.9-58571-1239785323008, blockid: blk_-3625756261105283729_1001
    [junit] 2009-04-15 08:48:43,286 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-3625756261105283729_1001 terminating
    [junit] 2009-04-15 08:48:43,288 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-15 08:48:43,289 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
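
The sequence above (create /test, a block allocation, a 4096-byte HDFS_WRITE received by both datanodes, then PacketResponder termination and blockMap updates) is what a small client-side write against the mini cluster produces; the trace shows that pattern twice for the same path. Roughly, and with a placeholder buffer size matching the 4096-byte writes in the clienttrace lines:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteTestFile {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // assumed to point at the test cluster
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(new Path("/test"));
        out.write(new byte[4096]);                  // one 4 KB write, as in the trace
        out.close();                                // completes the block and logs the edits
        fs.close();
      }
    }
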
    [junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-46751665
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId1676075030
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1536477668
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1168987622
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort45581
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort55779
    [junit] Info: key = bytes_written; val = 0
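
The "Query MBeanServer MBeans" block above comes from enumerating the hadoop JMX domain. A minimal, self-contained sketch of the same query against the local platform MBeanServer follows; the domain and object-name pattern match the log, while everything else is illustrative.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class ListDataNodeMBeans {
      public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        System.out.println("MBean count = " + server.getMBeanCount());
        // Matches names such as hadoop:service=DataNode,name=RpcActivityForPort55779
        for (ObjectName name : server.queryNames(new ObjectName("hadoop:service=DataNode,*"), null)) {
          System.out.println("hadoop services: " + name);
          // e.g. read one metric attribute, as the test does with bytes_written:
          // Object val = server.getAttribute(name, "bytes_written");
        }
      }
    }
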
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-04-15 08:48:43,392 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 45581
    [junit] 2009-04-15 08:48:43,393 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 45581
    [junit] 2009-04-15 08:48:43,393 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 45581: exiting
    [junit] 2009-04-15 08:48:43,393 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-15 08:48:43,393 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 45581: exiting
    [junit] 2009-04-15 08:48:43,393 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 45581: exiting
    [junit] 2009-04-15 08:48:43,394 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:36698, storageID=DS-291892191-67.195.138.9-36698-1239785323149, infoPort=54532, ipcPort=45581):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-15 08:48:43,394 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-15 08:48:43,395 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-15 08:48:43,395 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:36698, storageID=DS-291892191-67.195.138.9-36698-1239785323149, infoPort=54532, ipcPort=45581):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-15 08:48:43,396 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 45581
    [junit] 2009-04-15 08:48:43,396 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-04-15 08:48:43,498 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 55779
    [junit] 2009-04-15 08:48:43,499 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 55779: exiting
    [junit] 2009-04-15 08:48:43,499 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 55779: exiting
    [junit] 2009-04-15 08:48:43,500 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:58571, storageID=DS-1227352605-67.195.138.9-58571-1239785323008, infoPort=48047, ipcPort=55779):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-15 08:48:43,499 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-15 08:48:43,499 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 55779: exiting
    [junit] 2009-04-15 08:48:43,499 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-15 08:48:43,499 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 55779
    [junit] 2009-04-15 08:48:43,501 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-15 08:48:43,502 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:58571, storageID=DS-1227352605-67.195.138.9-58571-1239785323008, infoPort=48047, ipcPort=55779):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-15 08:48:43,502 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 55779
    [junit] 2009-04-15 08:48:43,502 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-15 08:48:43,505 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2359)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-15 08:48:43,505 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-15 08:48:43,505 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 16 2 
    [junit] 2009-04-15 08:48:43,506 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-15 08:48:43,507 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 45270
    [junit] 2009-04-15 08:48:43,507 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 45270
    [junit] 2009-04-15 08:48:43,508 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-15 08:48:43,508 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 45270: exiting
    [junit] 2009-04-15 08:48:43,508 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 45270: exiting
    [junit] 2009-04-15 08:48:43,509 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 45270: exiting
    [junit] 2009-04-15 08:48:43,509 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 45270: exiting
    [junit] 2009-04-15 08:48:43,510 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 45270: exiting
    [junit] 2009-04-15 08:48:43,509 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 45270: exiting
    [junit] 2009-04-15 08:48:43,510 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 45270: exiting
    [junit] 2009-04-15 08:48:43,510 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 45270: exiting
    [junit] 2009-04-15 08:48:43,510 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 45270: exiting
    [junit] 2009-04-15 08:48:43,510 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 45270: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.996 sec
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.105 sec
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-04-15 08:48:44,411 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-04-15 08:48:44,424 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.185 sec
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: 4570181145371200710(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: 3936155686533720704(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 421/12(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: -4256366235203113534(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: -2158977114618471186(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: -4540180234436261788(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 327/197(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: -5127121557757729690(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.061 sec
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-04-15 08:48:46,245 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
    [junit] 2009-04-15 08:48:46,750 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 28711
    [junit] 2009-04-15 08:48:46,801 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 28711 28713 28714 ]
    [junit] 2009-04-15 08:48:53,331 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 28725 28711 28727 28721 28723 28717 28719 28713 28715 ]
    [junit] 2009-04-15 08:48:53,345 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-04-15 08:48:53,346 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
    [junit] 2009-04-15 08:48:53,346 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 28711 with SIGTERM. Exit code 0
    [junit] 2009-04-15 08:48:53,428 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.277 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-04-15 08:48:54,409 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.629 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:770: Tests failed!

Total time: 179 minutes 9 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Build failed in Hudson: Hadoop-trunk #806

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/806/

------------------------------------------
[...truncated 451690 lines...]
    [junit] 2009-04-13 15:11:05,435 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 36979
    [junit] 2009-04-13 15:11:05,436 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-13 15:11:05,436 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239641759436 with interval 21600000
    [junit] 2009-04-13 15:11:05,438 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 40950
    [junit] 2009-04-13 15:11:05,438 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-13 15:11:05,506 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:40950
    [junit] 2009-04-13 15:11:05,507 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-13 15:11:05,508 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=33805
    [junit] 2009-04-13 15:11:05,509 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-13 15:11:05,509 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 33805: starting
    [junit] 2009-04-13 15:11:05,509 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 33805: starting
    [junit] 2009-04-13 15:11:05,509 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 33805: starting
    [junit] 2009-04-13 15:11:05,509 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 33805: starting
    [junit] 2009-04-13 15:11:05,510 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:36979, storageID=, infoPort=40950, ipcPort=33805)
    [junit] 2009-04-13 15:11:05,512 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:36979 storage DS-1024826634-67.195.138.9-36979-1239635465511
    [junit] 2009-04-13 15:11:05,512 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:36979
    [junit] 2009-04-13 15:11:05,515 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1024826634-67.195.138.9-36979-1239635465511 is assigned to data-node 127.0.0.1:36979
    [junit] 2009-04-13 15:11:05,515 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:36979, storageID=DS-1024826634-67.195.138.9-36979-1239635465511, infoPort=40950, ipcPort=33805)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 
    [junit] 2009-04-13 15:11:05,516 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-13 15:11:05,518 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3  is not formatted.
    [junit] 2009-04-13 15:11:05,519 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-13 15:11:05,530 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4  is not formatted.
    [junit] 2009-04-13 15:11:05,530 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-04-13 15:11:05,566 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-04-13 15:11:05,567 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 55551
    [junit] 2009-04-13 15:11:05,567 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-13 15:11:05,568 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1239640782568 with interval 21600000
    [junit] 2009-04-13 15:11:05,570 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 45660
    [junit] 2009-04-13 15:11:05,570 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-13 15:11:05,575 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-13 15:11:05,575 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-13 15:11:05,639 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:45660
    [junit] 2009-04-13 15:11:05,640 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-13 15:11:05,641 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=53696
    [junit] 2009-04-13 15:11:05,642 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-04-13 15:11:05,642 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 53696: starting
    [junit] 2009-04-13 15:11:05,642 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 53696: starting
    [junit] 2009-04-13 15:11:05,643 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 53696: starting
    [junit] 2009-04-13 15:11:05,643 INFO  datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:55551, storageID=, infoPort=45660, ipcPort=53696)
    [junit] 2009-04-13 15:11:05,643 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 53696: starting
    [junit] 2009-04-13 15:11:05,644 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:55551 storage DS-274840152-67.195.138.9-55551-1239635465644
    [junit] 2009-04-13 15:11:05,645 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:55551
    [junit] 2009-04-13 15:11:05,647 INFO  datanode.DataNode (DataNode.java:register(554)) - New storage id DS-274840152-67.195.138.9-55551-1239635465644 is assigned to data-node 127.0.0.1:55551
    [junit] 2009-04-13 15:11:05,648 INFO  datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:55551, storageID=DS-274840152-67.195.138.9-55551-1239635465644, infoPort=45660, ipcPort=53696)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-13 15:11:05,654 INFO  datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-13 15:11:05,679 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 15:11:05,680 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 15:11:05,693 INFO  datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-13 15:11:05,694 INFO  datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
    [junit] 2009-04-13 15:11:05,728 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 15:11:05,728 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-04-13 15:11:05,731 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_6482305749944161008_1001
    [junit] 2009-04-13 15:11:05,733 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_6482305749944161008_1001 src: /127.0.0.1:41081 dest: /127.0.0.1:36979
    [junit] 2009-04-13 15:11:05,734 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_6482305749944161008_1001 src: /127.0.0.1:57734 dest: /127.0.0.1:55551
    [junit] 2009-04-13 15:11:05,736 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:57734, dest: /127.0.0.1:55551, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1657953501, offset: 0, srvID: DS-274840152-67.195.138.9-55551-1239635465644, blockid: blk_6482305749944161008_1001
    [junit] 2009-04-13 15:11:05,737 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_6482305749944161008_1001 terminating
    [junit] 2009-04-13 15:11:05,737 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:55551 is added to blk_6482305749944161008_1001 size 4096
    [junit] 2009-04-13 15:11:05,737 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:41081, dest: /127.0.0.1:36979, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1657953501, offset: 0, srvID: DS-1024826634-67.195.138.9-36979-1239635465511, blockid: blk_6482305749944161008_1001
    [junit] 2009-04-13 15:11:05,738 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36979 is added to blk_6482305749944161008_1001 size 4096
    [junit] 2009-04-13 15:11:05,738 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_6482305749944161008_1001 terminating
    [junit] 2009-04-13 15:11:05,739 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_4263478641753483227_1001
    [junit] 2009-04-13 15:11:05,740 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_4263478641753483227_1001 src: /127.0.0.1:41083 dest: /127.0.0.1:36979
    [junit] 2009-04-13 15:11:05,741 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_4263478641753483227_1001 src: /127.0.0.1:57736 dest: /127.0.0.1:55551
    [junit] 2009-04-13 15:11:05,743 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:57736, dest: /127.0.0.1:55551, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1657953501, offset: 0, srvID: DS-274840152-67.195.138.9-55551-1239635465644, blockid: blk_4263478641753483227_1001
    [junit] 2009-04-13 15:11:05,743 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:55551 is added to blk_4263478641753483227_1001 size 4096
    [junit] 2009-04-13 15:11:05,743 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_4263478641753483227_1001 terminating
    [junit] 2009-04-13 15:11:05,744 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36979 is added to blk_4263478641753483227_1001 size 4096
    [junit] 2009-04-13 15:11:05,744 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:41083, dest: /127.0.0.1:36979, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1657953501, offset: 0, srvID: DS-1024826634-67.195.138.9-36979-1239635465511, blockid: blk_4263478641753483227_1001
    [junit] 2009-04-13 15:11:05,745 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_4263478641753483227_1001 terminating
    [junit] 2009-04-13 15:11:05,746 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 15:11:05,747 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-386755768
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId275283970
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-119683701
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1831038872
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort33805
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort53696
    [junit] Info: key = bytes_written; val = 0
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-04-13 15:11:05,849 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 53696
    [junit] 2009-04-13 15:11:05,849 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 53696: exiting
    [junit] 2009-04-13 15:11:05,850 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 53696: exiting
    [junit] 2009-04-13 15:11:05,850 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 53696: exiting
    [junit] 2009-04-13 15:11:05,850 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 53696
    [junit] 2009-04-13 15:11:05,850 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-13 15:11:05,850 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:55551, storageID=DS-274840152-67.195.138.9-55551-1239635465644, infoPort=45660, ipcPort=53696):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-13 15:11:05,850 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-13 15:11:05,851 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-13 15:11:05,852 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:55551, storageID=DS-274840152-67.195.138.9-55551-1239635465644, infoPort=45660, ipcPort=53696):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'} 
    [junit] 2009-04-13 15:11:05,852 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 53696
    [junit] 2009-04-13 15:11:05,853 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-04-13 15:11:05,854 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 33805
    [junit] 2009-04-13 15:11:05,854 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 33805: exiting
    [junit] 2009-04-13 15:11:05,855 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 33805
    [junit] 2009-04-13 15:11:05,855 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:36979, storageID=DS-1024826634-67.195.138.9-36979-1239635465511, infoPort=40950, ipcPort=33805):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-13 15:11:05,855 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-13 15:11:05,855 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 33805: exiting
    [junit] 2009-04-13 15:11:05,855 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 33805: exiting
    [junit] 2009-04-13 15:11:05,857 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
    [junit] 2009-04-13 15:11:05,855 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-13 15:11:05,858 INFO  datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:36979, storageID=DS-1024826634-67.195.138.9-36979-1239635465511, infoPort=40950, ipcPort=33805):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'} 
    [junit] 2009-04-13 15:11:05,858 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 33805
    [junit] 2009-04-13 15:11:05,858 INFO  datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-13 15:11:05,959 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2352)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-13 15:11:05,959 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 34 14 
    [junit] 2009-04-13 15:11:05,960 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-13 15:11:05,960 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS); 
    [junit] 2009-04-13 15:11:05,961 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 36513
    [junit] 2009-04-13 15:11:05,961 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36513
    [junit] 2009-04-13 15:11:05,961 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 36513: exiting
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 36513: exiting
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 36513: exiting
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 36513: exiting
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 36513: exiting
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 36513: exiting
    [junit] 2009-04-13 15:11:05,963 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 36513: exiting
    [junit] 2009-04-13 15:11:05,963 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 36513: exiting
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 36513: exiting
    [junit] 2009-04-13 15:11:05,962 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 36513: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.087 sec
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.101 sec
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-04-13 15:11:06,925 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-04-13 15:11:06,937 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.185 sec
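
The usage listing above is printed because GenericOptionsParser was handed a -jt flag with no value. As a rough sketch of how those generic options are normally consumed by a driver, the snippet below feeds a well-formed argument list through org.apache.hadoop.util.GenericOptionsParser; the -fs/-jt/-D values and the trailing input/output arguments are placeholders, not taken from this build.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.GenericOptionsParser;

    // Sketch: parse the generic options listed in the usage text above.
    // All option values here are placeholders.
    public class ParseGenericOptions {
        public static void main(String[] args) throws Exception {
            String[] argv = {
                "-fs", "local",                 // file system to use
                "-jt", "local",                 // job tracker (the failing case above omitted this value)
                "-D", "mapred.reduce.tasks=2",  // set an arbitrary property
                "input", "output"               // application-specific arguments
            };
            Configuration conf = new Configuration();
            GenericOptionsParser parser = new GenericOptionsParser(conf, argv);
            // Whatever was not a generic option is left for the application.
            for (String remaining : parser.getRemainingArgs()) {
                System.out.println("remaining arg: " + remaining);
            }
            System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        }
    }
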
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: -4332669283686278723(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: -8247351533869240910(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 79/358(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: -1699693244825218947(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: 4803907685756422304(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: -9008153530866001712(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 111/142(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: -6179356047062485005(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.058 sec
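
TestIndexedSort above drives QuickSort and HeapSort through an interface that exposes records only as compare(i, j) / swap(i, j) calls on index positions, which is why the log reports random seeds and cmp/swp counts rather than data. The stand-alone sketch below illustrates that indexed-sort pattern with a deliberately simple selection sort; the Indexed interface and IndexedSortSketch class are made-up names, not Hadoop's IndexedSortable/IndexedSorter.

    // Stand-alone sketch of index-based sorting: the sorter only ever calls
    // compare(i, j) and swap(i, j), so the record layout stays opaque to it.
    public class IndexedSortSketch {

        interface Indexed {
            int compare(int i, int j);
            void swap(int i, int j);
        }

        // Selection sort over index positions [0, n); real sorters such as the
        // QuickSort and HeapSort in the log above use the same two calls.
        static void sort(Indexed s, int n) {
            for (int i = 0; i < n - 1; i++) {
                int min = i;
                for (int j = i + 1; j < n; j++) {
                    if (s.compare(j, min) < 0) {
                        min = j;
                    }
                }
                if (min != i) {
                    s.swap(i, min);
                }
            }
        }

        public static void main(String[] args) {
            final int[] keys = {5, 1, 4, 2, 3};
            sort(new Indexed() {
                public int compare(int i, int j) { return keys[i] - keys[j]; }
                public void swap(int i, int j) {
                    int t = keys[i]; keys[i] = keys[j]; keys[j] = t;
                }
            }, keys.length);
            System.out.println(java.util.Arrays.toString(keys)); // prints [1, 2, 3, 4, 5]
        }
    }
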
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-04-13 15:11:08,809 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
    [junit] 2009-04-13 15:11:09,314 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 24638
    [junit] 2009-04-13 15:11:09,384 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 24640 24641 24638 ]
    [junit] 2009-04-13 15:11:15,932 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 24642 24640 24646 24644 24650 24648 24638 24654 24652 ]
    [junit] 2009-04-13 15:11:15,944 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell command exited with a non-zero exit code. This is expected, as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-04-13 15:11:15,944 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
    [junit] 2009-04-13 15:11:15,946 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 24638 with SIGTERM. Exit code 0
    [junit] 2009-04-13 15:11:16,033 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.328 sec
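
TestProcfsBasedProcessTree above spawns a subtree of processes under a setsid'ed root (pid 24638) and then tears the whole group down with SIGTERM, which is what the "Killing all processes in the process group 24638 with SIGTERM" line records. As a rough, Unix-only sketch of that mechanism (not Hadoop's ProcessTree code), the snippet below signals a whole process group from Java by shelling out to kill; the default pgid is a placeholder taken from this log.

    import java.io.IOException;

    // Hedged sketch: send SIGTERM to every process in a process group, the
    // mechanism behind the "Killing all processes in the process group ...
    // with SIGTERM" line above. Requires a Unix-like system with /bin/kill.
    public class KillProcessGroup {
        public static void main(String[] args) throws IOException, InterruptedException {
            String pgid = args.length > 0 ? args[0] : "24638"; // pid of the setsid'ed root (placeholder)
            // kill -TERM -- -PGID signals the whole group; "--" keeps the
            // negative pid from being parsed as an option.
            Process p = new ProcessBuilder("kill", "-TERM", "--", "-" + pgid).start();
            System.out.println("kill exited with code " + p.waitFor());
        }
    }
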
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-04-13 15:11:16,974 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.581 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.183 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.091 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:770: Tests failed!

Total time: 184 minutes 13 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...