Posted to common-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.zones.apache.org> on 2009/04/09 16:49:04 UTC
Build failed in Hudson: Hadoop-trunk #802
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/802/changes
Changes:
[nigel] HADOOP-5645. After HADOOP-4920 we need a place to checkin releasenotes.html. Contributed by nigel.
[yhemanth] HADOOP-5462. Fixed a double free bug in the task-controller executable. Contributed by Sreekanth Ramakrishnan.
------------------------------------------
[...truncated 353901 lines...]
[junit] 2009-04-09 15:01:20,014 INFO datanode.DataNode (FSDataset.java:registerMBean(1414)) - Registered FSDatasetStatusMBean
[junit] 2009-04-09 15:01:20,015 INFO datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 54469
[junit] 2009-04-09 15:01:20,015 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-09 15:01:20,017 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 48661
[junit] 2009-04-09 15:01:20,017 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-04-09 15:01:20,085 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:48661
[junit] 2009-04-09 15:01:20,086 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-09 15:01:20,087 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=45746
[junit] 2009-04-09 15:01:20,087 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-04-09 15:01:20,088 INFO datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:54469, storageID=, infoPort=48661, ipcPort=45746)
[junit] 2009-04-09 15:01:20,088 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 45746: starting
[junit] 2009-04-09 15:01:20,088 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 45746: starting
[junit] 2009-04-09 15:01:20,088 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 45746: starting
[junit] 2009-04-09 15:01:20,088 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 45746: starting
[junit] 2009-04-09 15:01:20,090 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:54469 storage DS-169366490-67.195.138.9-54469-1239289280089
[junit] 2009-04-09 15:01:20,090 INFO net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:54469
[junit] 2009-04-09 15:01:20,093 INFO datanode.DataNode (DataNode.java:register(554)) - New storage id DS-169366490-67.195.138.9-54469-1239289280089 is assigned to data-node 127.0.0.1:54469
[junit] 2009-04-09 15:01:20,093 INFO datanode.DataNode (DataNode.java:run(1196)) - DatanodeRegistration(127.0.0.1:54469, storageID=DS-169366490-67.195.138.9-54469-1239289280089, infoPort=48661, ipcPort=45746)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4
[junit] 2009-04-09 15:01:20,094 INFO datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-04-09 15:01:20,103 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3 is not formatted.
[junit] 2009-04-09 15:01:20,103 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-09 15:01:20,108 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 is not formatted.
[junit] 2009-04-09 15:01:20,111 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-09 15:01:20,130 INFO datanode.DataNode (DataNode.java:offerService(778)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-04-09 15:01:20,131 INFO datanode.DataNode (DataNode.java:offerService(803)) - Starting Periodic block scanner.
[junit] 2009-04-09 15:01:20,143 INFO datanode.DataNode (FSDataset.java:registerMBean(1414)) - Registered FSDatasetStatusMBean
[junit] 2009-04-09 15:01:20,144 INFO datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 34154
[junit] 2009-04-09 15:01:20,144 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-09 15:01:20,146 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 35240
[junit] 2009-04-09 15:01:20,146 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-04-09 15:01:20,213 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:35240
[junit] 2009-04-09 15:01:20,214 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-09 15:01:20,215 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=36643
[junit] 2009-04-09 15:01:20,216 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-04-09 15:01:20,216 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 36643: starting
[junit] 2009-04-09 15:01:20,216 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 36643: starting
[junit] 2009-04-09 15:01:20,216 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 36643: starting
[junit] 2009-04-09 15:01:20,217 INFO datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:34154, storageID=, infoPort=35240, ipcPort=36643)
[junit] 2009-04-09 15:01:20,217 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 36643: starting
[junit] 2009-04-09 15:01:20,218 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34154 storage DS-1336559911-67.195.138.9-34154-1239289280217
[junit] 2009-04-09 15:01:20,219 INFO net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:34154
[junit] 2009-04-09 15:01:20,221 INFO datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1336559911-67.195.138.9-34154-1239289280217 is assigned to data-node 127.0.0.1:34154
[junit] 2009-04-09 15:01:20,222 INFO datanode.DataNode (DataNode.java:run(1196)) - DatanodeRegistration(127.0.0.1:34154, storageID=DS-1336559911-67.195.138.9-34154-1239289280217, infoPort=35240, ipcPort=36643)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-09 15:01:20,229 INFO datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-04-09 15:01:20,253 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-09 15:01:20,254 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-09 15:01:20,261 INFO datanode.DataNode (DataNode.java:offerService(778)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-04-09 15:01:20,262 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-09 15:01:20,263 INFO datanode.DataNode (DataNode.java:offerService(803)) - Starting Periodic block scanner.
[junit] 2009-04-09 15:01:20,264 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/test dst=null perm=hudson:supergroup:rw-r--r--
[junit] 2009-04-09 15:01:20,267 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_-1861198071259255944_1001
[junit] 2009-04-09 15:01:20,269 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-1861198071259255944_1001 src: /127.0.0.1:33146 dest: /127.0.0.1:54469
[junit] 2009-04-09 15:01:20,270 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-1861198071259255944_1001 src: /127.0.0.1:54965 dest: /127.0.0.1:34154
[junit] 2009-04-09 15:01:20,272 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:54965, dest: /127.0.0.1:34154, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1201655178, offset: 0, srvID: DS-1336559911-67.195.138.9-34154-1239289280217, blockid: blk_-1861198071259255944_1001
[junit] 2009-04-09 15:01:20,272 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-1861198071259255944_1001 terminating
[junit] 2009-04-09 15:01:20,273 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:34154 is added to blk_-1861198071259255944_1001 size 4096
[junit] 2009-04-09 15:01:20,273 INFO DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:33146, dest: /127.0.0.1:54469, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1201655178, offset: 0, srvID: DS-169366490-67.195.138.9-54469-1239289280089, blockid: blk_-1861198071259255944_1001
[junit] 2009-04-09 15:01:20,274 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:54469 is added to blk_-1861198071259255944_1001 size 4096
[junit] 2009-04-09 15:01:20,274 INFO datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-1861198071259255944_1001 terminating
[junit] 2009-04-09 15:01:20,275 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_152240167602231045_1001
[junit] 2009-04-09 15:01:20,276 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_152240167602231045_1001 src: /127.0.0.1:33148 dest: /127.0.0.1:54469
[junit] 2009-04-09 15:01:20,277 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_152240167602231045_1001 src: /127.0.0.1:54967 dest: /127.0.0.1:34154
[junit] 2009-04-09 15:01:20,279 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:54967, dest: /127.0.0.1:34154, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1201655178, offset: 0, srvID: DS-1336559911-67.195.138.9-34154-1239289280217, blockid: blk_152240167602231045_1001
[junit] 2009-04-09 15:01:20,279 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_152240167602231045_1001 terminating
[junit] 2009-04-09 15:01:20,280 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:34154 is added to blk_152240167602231045_1001 size 4096
[junit] 2009-04-09 15:01:20,280 INFO DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:33148, dest: /127.0.0.1:54469, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1201655178, offset: 0, srvID: DS-169366490-67.195.138.9-54469-1239289280089, blockid: blk_152240167602231045_1001
[junit] 2009-04-09 15:01:20,281 INFO datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_152240167602231045_1001 terminating
[junit] 2009-04-09 15:01:20,281 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:54469 is added to blk_152240167602231045_1001 size 4096
[junit] 2009-04-09 15:01:20,283 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-09 15:01:20,283 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
[junit]
[junit] Domains:
[junit] Domain = JMImplementation
[junit] Domain = com.sun.management
[junit] Domain = hadoop
[junit] Domain = java.lang
[junit] Domain = java.util.logging
[junit]
[junit] MBeanServer default domain = DefaultDomain
[junit]
[junit] MBean count = 26
[junit]
[junit] Query MBeanServer MBeans:
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-572218472
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId725167550
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1971668979
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId287793679
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort36643
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort45746
[junit] Info: key = bytes_written; val = 0
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 1
[junit] 2009-04-09 15:01:20,386 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 36643
[junit] 2009-04-09 15:01:20,386 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 36643: exiting
[junit] 2009-04-09 15:01:20,387 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 36643: exiting
[junit] 2009-04-09 15:01:20,387 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 36643: exiting
[junit] 2009-04-09 15:01:20,387 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36643
[junit] 2009-04-09 15:01:20,387 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-09 15:01:20,388 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:34154, storageID=DS-1336559911-67.195.138.9-34154-1239289280217, infoPort=35240, ipcPort=36643):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-09 15:01:20,387 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-04-09 15:01:20,388 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(603)) - Exiting DataBlockScanner thread.
[junit] 2009-04-09 15:01:20,389 INFO datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:34154, storageID=DS-1336559911-67.195.138.9-34154-1239289280217, infoPort=35240, ipcPort=36643):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-09 15:01:20,389 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 36643
[junit] 2009-04-09 15:01:20,389 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 0
[junit] 2009-04-09 15:01:20,490 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 45746
[junit] 2009-04-09 15:01:20,491 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 45746: exiting
[junit] 2009-04-09 15:01:20,491 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 45746: exiting
[junit] 2009-04-09 15:01:20,491 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 45746
[junit] 2009-04-09 15:01:20,491 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-09 15:01:20,491 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-04-09 15:01:20,492 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:54469, storageID=DS-169366490-67.195.138.9-54469-1239289280089, infoPort=48661, ipcPort=45746):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-09 15:01:20,492 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 45746: exiting
[junit] 2009-04-09 15:01:21,131 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(603)) - Exiting DataBlockScanner thread.
[junit] 2009-04-09 15:01:21,492 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-04-09 15:01:21,493 INFO datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:54469, storageID=DS-169366490-67.195.138.9-54469-1239289280089, infoPort=48661, ipcPort=45746):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] 2009-04-09 15:01:21,493 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 45746
[junit] 2009-04-09 15:01:21,493 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-04-09 15:01:21,594 WARN namenode.FSNamesystem (FSNamesystem.java:run(2352)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-09 15:01:21,595 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 8 0
[junit] 2009-04-09 15:01:21,595 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-09 15:01:21,596 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-09 15:01:21,596 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 58925
[junit] 2009-04-09 15:01:21,596 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-09 15:01:21,596 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 58925: exiting
[junit] 2009-04-09 15:01:21,596 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 58925
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 58925: exiting
[junit] 2009-04-09 15:01:21,597 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 58925: exiting
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.123 sec
[junit] Running org.apache.hadoop.util.TestCyclicIteration
[junit]
[junit]
[junit] integers=[]
[junit] map={}
[junit] start=-1, iteration=[]
[junit]
[junit]
[junit] integers=[0]
[junit] map={0=0}
[junit] start=-1, iteration=[0]
[junit] start=0, iteration=[0]
[junit] start=1, iteration=[0]
[junit]
[junit]
[junit] integers=[0, 2]
[junit] map={0=0, 2=2}
[junit] start=-1, iteration=[0, 2]
[junit] start=0, iteration=[2, 0]
[junit] start=1, iteration=[2, 0]
[junit] start=2, iteration=[0, 2]
[junit] start=3, iteration=[0, 2]
[junit]
[junit]
[junit] integers=[0, 2, 4]
[junit] map={0=0, 2=2, 4=4}
[junit] start=-1, iteration=[0, 2, 4]
[junit] start=0, iteration=[2, 4, 0]
[junit] start=1, iteration=[2, 4, 0]
[junit] start=2, iteration=[4, 0, 2]
[junit] start=3, iteration=[4, 0, 2]
[junit] start=4, iteration=[0, 2, 4]
[junit] start=5, iteration=[0, 2, 4]
[junit]
[junit]
[junit] integers=[0, 2, 4, 6]
[junit] map={0=0, 2=2, 4=4, 6=6}
[junit] start=-1, iteration=[0, 2, 4, 6]
[junit] start=0, iteration=[2, 4, 6, 0]
[junit] start=1, iteration=[2, 4, 6, 0]
[junit] start=2, iteration=[4, 6, 0, 2]
[junit] start=3, iteration=[4, 6, 0, 2]
[junit] start=4, iteration=[6, 0, 2, 4]
[junit] start=5, iteration=[6, 0, 2, 4]
[junit] start=6, iteration=[0, 2, 4, 6]
[junit] start=7, iteration=[0, 2, 4, 6]
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.095 sec
[junit] Running org.apache.hadoop.util.TestGenericsUtil
[junit] 2009-04-09 15:01:22,539 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] 2009-04-09 15:01:22,551 WARN util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
[junit] usage: general options are:
[junit] -archives <paths> comma separated archives to be unarchived
[junit] on the compute machines.
[junit] -conf <configuration file> specify an application configuration file
[junit] -D <property=value> use value for given property
[junit] -files <paths> comma separated files to be copied to the
[junit] map reduce cluster
[junit] -fs <local|namenode:port> specify a namenode
[junit] -jt <local|jobtracker:port> specify a job tracker
[junit] -libjars <paths> comma separated jar files to include in the
[junit] classpath.
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.186 sec
[junit] Running org.apache.hadoop.util.TestIndexedSort
[junit] sortRandom seed: 2065569555827122682(org.apache.hadoop.util.QuickSort)
[junit] testSorted seed: 8876351528918944772(org.apache.hadoop.util.QuickSort)
[junit] testAllEqual setting min/max at 397/8(org.apache.hadoop.util.QuickSort)
[junit] sortWritable seed: 8176784154270274857(org.apache.hadoop.util.QuickSort)
[junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
[junit] sortRandom seed: 4949286938396782865(org.apache.hadoop.util.HeapSort)
[junit] testSorted seed: -4643464972130608301(org.apache.hadoop.util.HeapSort)
[junit] testAllEqual setting min/max at 4/78(org.apache.hadoop.util.HeapSort)
[junit] sortWritable seed: 2599661307970499793(org.apache.hadoop.util.HeapSort)
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.013 sec
[junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
[junit] 2009-04-09 15:01:24,366 INFO util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
[junit] 2009-04-09 15:01:24,872 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 28774
[junit] 2009-04-09 15:01:24,936 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 28774 28777 28776 ]
[junit] 2009-04-09 15:01:31,467 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 28784 28786 28788 28790 28774 28792 28776 28778 28780 ]
[junit] 2009-04-09 15:01:31,480 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException:
[junit] 2009-04-09 15:01:31,480 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
[junit] 2009-04-09 15:01:31,481 INFO util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 28774 with SIGTERM. Exit code 0
[junit] 2009-04-09 15:01:31,535 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.266 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] 2009-04-09 15:01:32,488 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.586 sec
[junit] Running org.apache.hadoop.util.TestShell
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.186 sec
[junit] Running org.apache.hadoop.util.TestStringUtils
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec
BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build.xml:770: Tests failed!
Total time: 144 minutes 36 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...
Build failed in Hudson: Hadoop-trunk #803
Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/803/changes
Changes:
[szetszwo] Fix CHANGES.txt.
------------------------------------------
[...truncated 350699 lines...]
[junit] 2009-04-10 16:09:39,473 INFO datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 56424
[junit] 2009-04-10 16:09:39,473 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-10 16:09:39,475 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 52708
[junit] 2009-04-10 16:09:39,475 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-04-10 16:09:39,544 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:52708
[junit] 2009-04-10 16:09:39,544 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-10 16:09:39,546 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=49666
[junit] 2009-04-10 16:09:39,546 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-04-10 16:09:39,547 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 49666: starting
[junit] 2009-04-10 16:09:39,547 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 49666: starting
[junit] 2009-04-10 16:09:39,547 INFO datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:56424, storageID=, infoPort=52708, ipcPort=49666)
[junit] 2009-04-10 16:09:39,547 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 49666: starting
[junit] 2009-04-10 16:09:39,549 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 49666: starting
[junit] 2009-04-10 16:09:39,549 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:56424 storage DS-291053678-67.195.138.9-56424-1239379779548
[junit] 2009-04-10 16:09:39,550 INFO net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:56424
[junit] 2009-04-10 16:09:39,552 INFO datanode.DataNode (DataNode.java:register(554)) - New storage id DS-291053678-67.195.138.9-56424-1239379779548 is assigned to data-node 127.0.0.1:56424
[junit] 2009-04-10 16:09:39,552 INFO datanode.DataNode (DataNode.java:run(1196)) - DatanodeRegistration(127.0.0.1:56424, storageID=DS-291053678-67.195.138.9-56424-1239379779548, infoPort=52708, ipcPort=49666)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4
[junit] 2009-04-10 16:09:39,554 INFO datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-04-10 16:09:39,562 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3 is not formatted.
[junit] 2009-04-10 16:09:39,563 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-10 16:09:39,567 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 is not formatted.
[junit] 2009-04-10 16:09:39,567 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-10 16:09:39,590 INFO datanode.DataNode (DataNode.java:offerService(778)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-04-10 16:09:39,591 INFO datanode.DataNode (DataNode.java:offerService(803)) - Starting Periodic block scanner.
[junit] 2009-04-10 16:09:39,602 INFO datanode.DataNode (FSDataset.java:registerMBean(1414)) - Registered FSDatasetStatusMBean
[junit] 2009-04-10 16:09:39,603 INFO datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 37771
[junit] 2009-04-10 16:09:39,604 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-10 16:09:39,606 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 50434
[junit] 2009-04-10 16:09:39,607 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-04-10 16:09:39,675 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:50434
[junit] 2009-04-10 16:09:39,676 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-10 16:09:39,677 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=52932
[junit] 2009-04-10 16:09:39,678 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-04-10 16:09:39,679 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 52932: starting
[junit] 2009-04-10 16:09:39,679 INFO datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:37771, storageID=, infoPort=50434, ipcPort=52932)
[junit] 2009-04-10 16:09:39,678 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 52932: starting
[junit] 2009-04-10 16:09:39,678 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 52932: starting
[junit] 2009-04-10 16:09:39,678 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 52932: starting
[junit] 2009-04-10 16:09:39,682 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2077)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:37771 storage DS-106845047-67.195.138.9-37771-1239379779680
[junit] 2009-04-10 16:09:39,682 INFO net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:37771
[junit] 2009-04-10 16:09:39,685 INFO datanode.DataNode (DataNode.java:register(554)) - New storage id DS-106845047-67.195.138.9-37771-1239379779680 is assigned to data-node 127.0.0.1:37771
[junit] 2009-04-10 16:09:39,685 INFO datanode.DataNode (DataNode.java:run(1196)) - DatanodeRegistration(127.0.0.1:37771, storageID=DS-106845047-67.195.138.9-37771-1239379779680, infoPort=50434, ipcPort=52932)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-10 16:09:39,690 INFO datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-04-10 16:09:39,721 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-10 16:09:39,722 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-10 16:09:39,726 INFO datanode.DataNode (DataNode.java:offerService(778)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-04-10 16:09:39,726 INFO datanode.DataNode (DataNode.java:offerService(803)) - Starting Periodic block scanner.
[junit] 2009-04-10 16:09:39,740 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-10 16:09:39,741 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/test dst=null perm=hudson:supergroup:rw-r--r--
[junit] 2009-04-10 16:09:39,743 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_-1167952298056785757_1001
[junit] 2009-04-10 16:09:39,745 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-1167952298056785757_1001 src: /127.0.0.1:37695 dest: /127.0.0.1:37771
[junit] 2009-04-10 16:09:39,747 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-1167952298056785757_1001 src: /127.0.0.1:56688 dest: /127.0.0.1:56424
[junit] 2009-04-10 16:09:39,749 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:56688, dest: /127.0.0.1:56424, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489, offset: 0, srvID: DS-291053678-67.195.138.9-56424-1239379779548, blockid: blk_-1167952298056785757_1001
[junit] 2009-04-10 16:09:39,750 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56424 is added to blk_-1167952298056785757_1001 size 4096
[junit] 2009-04-10 16:09:39,750 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-1167952298056785757_1001 terminating
[junit] 2009-04-10 16:09:39,751 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:37771 is added to blk_-1167952298056785757_1001 size 4096
[junit] 2009-04-10 16:09:39,750 INFO DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:37695, dest: /127.0.0.1:37771, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489, offset: 0, srvID: DS-106845047-67.195.138.9-37771-1239379779680, blockid: blk_-1167952298056785757_1001
[junit] 2009-04-10 16:09:39,752 INFO datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-1167952298056785757_1001 terminating
[junit] 2009-04-10 16:09:39,753 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1474)) - BLOCK* NameSystem.allocateBlock: /test. blk_-7986410566441370361_1001
[junit] 2009-04-10 16:09:39,754 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-7986410566441370361_1001 src: /127.0.0.1:37697 dest: /127.0.0.1:37771
[junit] 2009-04-10 16:09:39,755 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-7986410566441370361_1001 src: /127.0.0.1:56690 dest: /127.0.0.1:56424
[junit] 2009-04-10 16:09:39,757 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:56690, dest: /127.0.0.1:56424, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489, offset: 0, srvID: DS-291053678-67.195.138.9-56424-1239379779548, blockid: blk_-7986410566441370361_1001
[junit] 2009-04-10 16:09:39,757 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-7986410566441370361_1001 terminating
[junit] 2009-04-10 16:09:39,758 INFO DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:37697, dest: /127.0.0.1:37771, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489, offset: 0, srvID: DS-106845047-67.195.138.9-37771-1239379779680, blockid: blk_-7986410566441370361_1001
[junit] 2009-04-10 16:09:39,758 INFO datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-7986410566441370361_1001 terminating
[junit] 2009-04-10 16:09:39,759 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56424 is added to blk_-7986410566441370361_1001 size 4096
[junit] 2009-04-10 16:09:39,760 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-10 16:09:39,761 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:37771 is added to blk_-7986410566441370361_1001 size 4096
[junit] 2009-04-10 16:09:39,761 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
[junit]
[junit] Domains:
[junit] Domain = JMImplementation
[junit] Domain = com.sun.management
[junit] Domain = hadoop
[junit] Domain = java.lang
[junit] Domain = java.util.logging
[junit]
[junit] MBeanServer default domain = DefaultDomain
[junit]
[junit] MBean count = 26
[junit]
[junit] Query MBeanServer MBeans:
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId1582077997
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId2033102362
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2079230896
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId916916940
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort49666
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort52932
[junit] Info: key = bytes_written; val = 0
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 1
[junit] 2009-04-10 16:09:39,864 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 52932
[junit] 2009-04-10 16:09:39,865 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 52932: exiting
[junit] 2009-04-10 16:09:39,865 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-04-10 16:09:39,865 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-10 16:09:39,865 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 52932: exiting
[junit] 2009-04-10 16:09:39,865 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 52932
[junit] 2009-04-10 16:09:39,865 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 52932: exiting
[junit] 2009-04-10 16:09:39,865 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:37771, storageID=DS-106845047-67.195.138.9-37771-1239379779680, infoPort=50434, ipcPort=52932):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-10 16:09:40,727 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(603)) - Exiting DataBlockScanner thread.
[junit] 2009-04-10 16:09:40,865 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-04-10 16:09:40,866 INFO datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:37771, storageID=DS-106845047-67.195.138.9-37771-1239379779680, infoPort=50434, ipcPort=52932):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-10 16:09:40,867 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 52932
[junit] 2009-04-10 16:09:40,867 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 0
[junit] 2009-04-10 16:09:40,968 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 49666
[junit] 2009-04-10 16:09:40,968 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 49666: exiting
[junit] 2009-04-10 16:09:40,969 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 49666
[junit] 2009-04-10 16:09:40,969 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-04-10 16:09:40,969 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:56424, storageID=DS-291053678-67.195.138.9-56424-1239379779548, infoPort=52708, ipcPort=49666):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-10 16:09:40,969 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 49666: exiting
[junit] 2009-04-10 16:09:40,969 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 49666: exiting
[junit] 2009-04-10 16:09:40,969 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-10 16:09:41,602 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(603)) - Exiting DataBlockScanner thread.
[junit] 2009-04-10 16:09:41,969 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-04-10 16:09:41,970 INFO datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:56424, storageID=DS-291053678-67.195.138.9-56424-1239379779548, infoPort=52708, ipcPort=49666):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] 2009-04-10 16:09:41,970 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 49666
[junit] 2009-04-10 16:09:41,970 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-04-10 16:09:42,072 WARN namenode.FSNamesystem (FSNamesystem.java:run(2352)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-10 16:09:42,072 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 17 1
[junit] 2009-04-10 16:09:42,072 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-10 16:09:42,073 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-10 16:09:42,073 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 57775
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 57775
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 57775: exiting
[junit] 2009-04-10 16:09:42,074 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 57775: exiting
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 6.052 sec
[junit] Running org.apache.hadoop.util.TestCyclicIteration
[junit]
[junit]
[junit] integers=[]
[junit] map={}
[junit] start=-1, iteration=[]
[junit]
[junit]
[junit] integers=[0]
[junit] map={0=0}
[junit] start=-1, iteration=[0]
[junit] start=0, iteration=[0]
[junit] start=1, iteration=[0]
[junit]
[junit]
[junit] integers=[0, 2]
[junit] map={0=0, 2=2}
[junit] start=-1, iteration=[0, 2]
[junit] start=0, iteration=[2, 0]
[junit] start=1, iteration=[2, 0]
[junit] start=2, iteration=[0, 2]
[junit] start=3, iteration=[0, 2]
[junit]
[junit]
[junit] integers=[0, 2, 4]
[junit] map={0=0, 2=2, 4=4}
[junit] start=-1, iteration=[0, 2, 4]
[junit] start=0, iteration=[2, 4, 0]
[junit] start=1, iteration=[2, 4, 0]
[junit] start=2, iteration=[4, 0, 2]
[junit] start=3, iteration=[4, 0, 2]
[junit] start=4, iteration=[0, 2, 4]
[junit] start=5, iteration=[0, 2, 4]
[junit]
[junit]
[junit] integers=[0, 2, 4, 6]
[junit] map={0=0, 2=2, 4=4, 6=6}
[junit] start=-1, iteration=[0, 2, 4, 6]
[junit] start=0, iteration=[2, 4, 6, 0]
[junit] start=1, iteration=[2, 4, 6, 0]
[junit] start=2, iteration=[4, 6, 0, 2]
[junit] start=3, iteration=[4, 6, 0, 2]
[junit] start=4, iteration=[6, 0, 2, 4]
[junit] start=5, iteration=[6, 0, 2, 4]
[junit] start=6, iteration=[0, 2, 4, 6]
[junit] start=7, iteration=[0, 2, 4, 6]
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.094 sec
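(Aside: the TestCyclicIteration output above follows a simple pattern: for a sorted set of keys, iteration begins at the first key strictly greater than `start` and wraps around. A minimal sketch of that behavior, not the actual Hadoop implementation, with a hypothetical function name:)

```python
def cyclic_iteration(sorted_keys, start):
    """Yield every key once, cyclically, beginning with the first key
    strictly greater than start and wrapping around to the front.
    Hypothetical sketch; matches the test log above, e.g.
    keys [0, 2, 4] with start=2 yields 4, 0, 2."""
    n = len(sorted_keys)
    if n == 0:
        return
    # Find the index of the first element strictly greater than start.
    i = 0
    while i < n and sorted_keys[i] <= start:
        i += 1
    # Emit all n elements, wrapping modulo n.
    for k in range(n):
        yield sorted_keys[(i + k) % n]
```

With `integers=[0, 2, 4]` this reproduces the logged iterations: `start=-1` gives `[0, 2, 4]`, `start=0` and `start=1` give `[2, 4, 0]`, and `start=4` or `start=5` wrap back to `[0, 2, 4]`.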
[junit] Running org.apache.hadoop.util.TestGenericsUtil
[junit] 2009-04-10 16:09:43,048 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] 2009-04-10 16:09:43,061 WARN util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
[junit] usage: general options are:
[junit] -archives <paths> comma separated archives to be unarchived
[junit] on the compute machines.
[junit] -conf <configuration file> specify an application configuration file
[junit] -D <property=value> use value for given property
[junit] -files <paths> comma separated files to be copied to the
[junit] map reduce cluster
[junit] -fs <local|namenode:port> specify a namenode
[junit] -jt <local|jobtracker:port> specify a job tracker
[junit] -libjars <paths> comma separated jar files to include in the
[junit] classpath.
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.187 sec
[junit] Running org.apache.hadoop.util.TestIndexedSort
[junit] sortRandom seed: -7027178055227047295(org.apache.hadoop.util.QuickSort)
[junit] testSorted seed: 4257467073421555077(org.apache.hadoop.util.QuickSort)
[junit] testAllEqual setting min/max at 410/374(org.apache.hadoop.util.QuickSort)
[junit] sortWritable seed: -3890651608684584113(org.apache.hadoop.util.QuickSort)
[junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
[junit] sortRandom seed: 2485120508806040293(org.apache.hadoop.util.HeapSort)
[junit] testSorted seed: 1242076885210625100(org.apache.hadoop.util.HeapSort)
[junit] testAllEqual setting min/max at 353/30(org.apache.hadoop.util.HeapSort)
[junit] sortWritable seed: -5121806697500483383(org.apache.hadoop.util.HeapSort)
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.015 sec
[junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
[junit] 2009-04-10 16:09:44,899 INFO util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
[junit] 2009-04-10 16:09:45,404 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 6082
[junit] 2009-04-10 16:09:45,448 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 6082 6084 6085 ]
[junit] 2009-04-10 16:09:51,980 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 6099 6082 6097 6086 6101 6084 6088 6095 6093 ]
[junit] 2009-04-10 16:09:51,991 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException:
[junit] 2009-04-10 16:09:51,991 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
[junit] 2009-04-10 16:09:51,991 INFO util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 6082 with SIGTERM. Exit code 0
[junit] 2009-04-10 16:09:52,070 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.262 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] 2009-04-10 16:09:52,977 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.574 sec
[junit] Running org.apache.hadoop.util.TestShell
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
[junit] Running org.apache.hadoop.util.TestStringUtils
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec
BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build.xml:770: Tests failed!
Total time: 162 minutes 11 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...