Posted to common-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.zones.apache.org> on 2009/03/19 19:39:25 UTC

Build failed in Hudson: Hadoop-trunk #784

See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/784/changes

Changes:

[ddas] HADOOP-5328. Fixes a problem in the renaming of job history files during job recovery. Contributed by Amar Kamat.

[yhemanth] HADOOP-5534. Fixed a deadlock in Fair scheduler's servlet. Contributed by Rahul Kumar Singh.

[ddas] HADOOP-5521. Moving the comment on 5521 to the trunk section in CHANGES.txt.

[ddas] HADOOP-5521. Removes dependency of TestJobInProgress on RESTART_COUNT JobHistory tag. Contributed by Ravi Gummadi.

[ddas] HADOOP-5522. Documents the setup/cleanup tasks in the mapred tutorial. Contributed by Amareshwari Sriramadasu.

[ddas] HADOOP-4842. Streaming now allows specifying a command for the combiner. Contributed by Amareshwari Sriramadasu.

[ddas] HADOOP-5486. Removes the CLASSPATH string from the command line and instead exports it in the environment. Contributed by Amareshwari Sriramadasu.

[ddas] HADOOP-5471. Fixes a problem to do with updating the log.index file in the case where a cleanup task is run. Contributed by Amareshwari Sriramadasu.

[omalley] HADOOP-5382. Support combiners in the new context object API. (omalley)

------------------------------------------
[...truncated 298126 lines...]
    [junit] Starting DataNode 0 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2
    [junit] 2009-03-19 18:49:26,049 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1 is not formatted.
    [junit] 2009-03-19 18:49:26,049 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-03-19 18:49:26,053 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2 is not formatted.
    [junit] 2009-03-19 18:49:26,053 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-03-19 18:49:26,084 INFO  datanode.DataNode (FSDataset.java:registerMBean(1414)) - Registered FSDatasetStatusMBean
    [junit] 2009-03-19 18:49:26,085 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 56923
    [junit] 2009-03-19 18:49:26,085 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-03-19 18:49:26,087 INFO  http.HttpServer (HttpServer.java:start(452)) - Jetty bound to port 38718
    [junit] 2009-03-19 18:49:26,087 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-03-19 18:49:26,153 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:38718
    [junit] 2009-03-19 18:49:26,153 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-03-19 18:49:26,155 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=50347
    [junit] 2009-03-19 18:49:26,155 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-03-19 18:49:26,156 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 50347: starting
    [junit] 2009-03-19 18:49:26,156 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 50347: starting
    [junit] 2009-03-19 18:49:26,156 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 50347: starting
    [junit] 2009-03-19 18:49:26,156 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 50347: starting
    [junit] 2009-03-19 18:49:26,157 INFO  datanode.DataNode (DataNode.java:startDataNode(400)) - dnRegistration = DatanodeRegistration(vesta.apache.org:56923, storageID=, infoPort=38718, ipcPort=50347)
    [junit] 2009-03-19 18:49:26,158 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2071)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:56923 storage DS-1081856601-67.195.138.9-56923-1237488566157
    [junit] 2009-03-19 18:49:26,159 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:56923
    [junit] 2009-03-19 18:49:26,161 INFO  datanode.DataNode (DataNode.java:register(548)) - New storage id DS-1081856601-67.195.138.9-56923-1237488566157 is assigned to data-node 127.0.0.1:56923
    [junit] 2009-03-19 18:49:26,162 INFO  datanode.DataNode (DataNode.java:run(1179)) - DatanodeRegistration(127.0.0.1:56923, storageID=DS-1081856601-67.195.138.9-56923-1237488566157, infoPort=38718, ipcPort=50347)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
    [junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4
    [junit] 2009-03-19 18:49:26,162 INFO  datanode.DataNode (DataNode.java:offerService(679)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-03-19 18:49:26,178 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3 is not formatted.
    [junit] 2009-03-19 18:49:26,179 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-03-19 18:49:26,197 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4 is not formatted.
    [junit] 2009-03-19 18:49:26,198 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
    [junit] 2009-03-19 18:49:26,199 INFO  datanode.DataNode (DataNode.java:offerService(761)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-03-19 18:49:26,200 INFO  datanode.DataNode (DataNode.java:offerService(786)) - Starting Periodic block scanner.
    [junit] 2009-03-19 18:49:26,246 INFO  datanode.DataNode (FSDataset.java:registerMBean(1414)) - Registered FSDatasetStatusMBean
    [junit] 2009-03-19 18:49:26,246 INFO  datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 51262
    [junit] 2009-03-19 18:49:26,247 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-03-19 18:49:26,249 INFO  http.HttpServer (HttpServer.java:start(452)) - Jetty bound to port 45687
    [junit] 2009-03-19 18:49:26,249 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-03-19 18:49:26,312 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:45687
    [junit] 2009-03-19 18:49:26,353 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-03-19 18:49:26,355 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=47383
    [junit] 2009-03-19 18:49:26,355 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
    [junit] 2009-03-19 18:49:26,355 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener on 47383: starting
    [junit] 2009-03-19 18:49:26,356 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 47383: starting
    [junit] 2009-03-19 18:49:26,356 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 47383: starting
    [junit] 2009-03-19 18:49:26,357 INFO  datanode.DataNode (DataNode.java:startDataNode(400)) - dnRegistration = DatanodeRegistration(vesta.apache.org:51262, storageID=, infoPort=45687, ipcPort=47383)
    [junit] 2009-03-19 18:49:26,357 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 47383: starting
    [junit] 2009-03-19 18:49:26,359 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2071)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:51262 storage DS-575830525-67.195.138.9-51262-1237488566358
    [junit] 2009-03-19 18:49:26,359 INFO  net.NetworkTopology (NetworkTopology.java:add(328)) - Adding a new node: /default-rack/127.0.0.1:51262
    [junit] 2009-03-19 18:49:26,380 INFO  datanode.DataNode (DataNode.java:register(548)) - New storage id DS-575830525-67.195.138.9-51262-1237488566358 is assigned to data-node 127.0.0.1:51262
    [junit] 2009-03-19 18:49:26,381 INFO  datanode.DataNode (DataNode.java:run(1179)) - DatanodeRegistration(127.0.0.1:51262, storageID=DS-575830525-67.195.138.9-51262-1237488566358, infoPort=45687, ipcPort=47383)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 2009-03-19 18:49:26,386 INFO  datanode.DataNode (DataNode.java:offerService(679)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-03-19 18:49:26,425 INFO  datanode.DataNode (DataNode.java:offerService(761)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-03-19 18:49:26,426 INFO  datanode.DataNode (DataNode.java:offerService(786)) - Starting Periodic block scanner.
    [junit] 2009-03-19 18:49:26,442 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(108)) - ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-03-19 18:49:26,457 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1476)) - BLOCK* NameSystem.allocateBlock: /test. blk_8310459178380135254_1001
    [junit] 2009-03-19 18:49:26,459 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_8310459178380135254_1001 src: /127.0.0.1:44075 dest: /127.0.0.1:51262
    [junit] 2009-03-19 18:49:26,460 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_8310459178380135254_1001 src: /127.0.0.1:33105 dest: /127.0.0.1:56923
    [junit] 2009-03-19 18:49:26,463 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:33105, dest: /127.0.0.1:56923, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-2146640809, offset: 0, srvID: DS-1081856601-67.195.138.9-56923-1237488566157, blockid: blk_8310459178380135254_1001
    [junit] 2009-03-19 18:49:26,463 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_8310459178380135254_1001 terminating
    [junit] 2009-03-19 18:49:26,464 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(2998)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56923 is added to blk_8310459178380135254_1001 size 4096
    [junit] 2009-03-19 18:49:26,465 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:44075, dest: /127.0.0.1:51262, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-2146640809, offset: 0, srvID: DS-575830525-67.195.138.9-51262-1237488566358, blockid: blk_8310459178380135254_1001
    [junit] 2009-03-19 18:49:26,466 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(2998)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:51262 is added to blk_8310459178380135254_1001 size 4096
    [junit] 2009-03-19 18:49:26,466 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_8310459178380135254_1001 terminating
    [junit] 2009-03-19 18:49:26,467 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1476)) - BLOCK* NameSystem.allocateBlock: /test. blk_5419782392014223592_1001
    [junit] 2009-03-19 18:49:26,469 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_5419782392014223592_1001 src: /127.0.0.1:44077 dest: /127.0.0.1:51262
    [junit] 2009-03-19 18:49:26,470 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_5419782392014223592_1001 src: /127.0.0.1:33107 dest: /127.0.0.1:56923
    [junit] 2009-03-19 18:49:26,472 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:33107, dest: /127.0.0.1:56923, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-2146640809, offset: 0, srvID: DS-1081856601-67.195.138.9-56923-1237488566157, blockid: blk_5419782392014223592_1001
    [junit] 2009-03-19 18:49:26,472 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_5419782392014223592_1001 terminating
    [junit] 2009-03-19 18:49:26,472 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(2998)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56923 is added to blk_5419782392014223592_1001 size 4096
    [junit] 2009-03-19 18:49:26,474 INFO  DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:44077, dest: /127.0.0.1:51262, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-2146640809, offset: 0, srvID: DS-575830525-67.195.138.9-51262-1237488566358, blockid: blk_5419782392014223592_1001
    [junit] 2009-03-19 18:49:26,474 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(2998)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:51262 is added to blk_5419782392014223592_1001 size 4096
    [junit] 2009-03-19 18:49:26,474 INFO  datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_5419782392014223592_1001 terminating
    [junit] init: server=localhost;port=;service=DataNode;localVMPid=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-245889414
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-354088590
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1344411054
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-880728258
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort47383
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort50347
    [junit] Info: key = bytes_written; val = 0
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-03-19 18:49:26,631 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 47383
    [junit] 2009-03-19 18:49:26,632 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 47383: exiting
    [junit] 2009-03-19 18:49:26,632 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 47383: exiting
    [junit] 2009-03-19 18:49:26,632 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 47383
    [junit] 2009-03-19 18:49:26,632 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 47383: exiting
    [junit] 2009-03-19 18:49:26,633 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-03-19 18:49:26,633 INFO  datanode.DataNode (DataNode.java:shutdown(587)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-03-19 18:49:26,634 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:51262, storageID=DS-575830525-67.195.138.9-51262-1237488566358, infoPort=45687, ipcPort=47383):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-03-19 18:49:27,427 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(603)) - Exiting DataBlockScanner thread.
    [junit] 2009-03-19 18:49:27,634 INFO  datanode.DataNode (DataNode.java:shutdown(587)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-03-19 18:49:27,634 INFO  datanode.DataNode (DataNode.java:run(1199)) - DatanodeRegistration(127.0.0.1:51262, storageID=DS-575830525-67.195.138.9-51262-1237488566358, infoPort=45687, ipcPort=47383):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 2009-03-19 18:49:27,635 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 47383
    [junit] 2009-03-19 18:49:27,635 INFO  datanode.DataNode (DataNode.java:shutdown(587)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-03-19 18:49:27,650 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 50347
    [junit] 2009-03-19 18:49:27,650 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 50347: exiting
    [junit] 2009-03-19 18:49:27,651 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 50347: exiting
    [junit] 2009-03-19 18:49:27,651 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 50347: exiting
    [junit] 2009-03-19 18:49:27,652 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 50347
    [junit] 2009-03-19 18:49:27,652 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-03-19 18:49:27,652 INFO  datanode.DataNode (DataNode.java:shutdown(587)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-03-19 18:49:27,652 WARN  datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:56923, storageID=DS-1081856601-67.195.138.9-56923-1237488566157, infoPort=38718, ipcPort=50347):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-03-19 18:49:28,201 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(603)) - Exiting DataBlockScanner thread.
    [junit] 2009-03-19 18:49:28,653 INFO  datanode.DataNode (DataNode.java:shutdown(587)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-03-19 18:49:28,653 INFO  datanode.DataNode (DataNode.java:run(1199)) - DatanodeRegistration(127.0.0.1:56923, storageID=DS-1081856601-67.195.138.9-56923-1237488566157, infoPort=38718, ipcPort=50347):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 2009-03-19 18:49:28,653 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 50347
    [junit] 2009-03-19 18:49:28,653 INFO  datanode.DataNode (DataNode.java:shutdown(587)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-03-19 18:49:28,783 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2346)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-03-19 18:49:28,783 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-03-19 18:49:28,783 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1044)) - Number of transactions: 3 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 19 7 
    [junit] 2009-03-19 18:49:28,785 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server on 49364
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 49364: exiting
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 49364: exiting
    [junit] 2009-03-19 18:49:28,787 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 49364: exiting
    [junit] 2009-03-19 18:49:28,787 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 49364: exiting
    [junit] 2009-03-19 18:49:28,787 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 49364: exiting
    [junit] 2009-03-19 18:49:28,787 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 49364: exiting
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 49364: exiting
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 49364: exiting
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 49364: exiting
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 49364: exiting
    [junit] 2009-03-19 18:49:28,786 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 49364
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 7.092 sec
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.088 sec
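The TestCyclicIteration output above follows one rule: for a given start value, the iteration visits every key strictly greater than the start in ascending order, then wraps around to the smallest keys. A minimal Python sketch of that behavior (the helper name `cyclic_iteration` is hypothetical, not Hadoop's API; `keys` is assumed sorted ascending):

```python
def cyclic_iteration(keys, start):
    """Yield keys strictly greater than `start` in order, then wrap around.

    Mirrors the iteration orders printed by TestCyclicIteration above.
    """
    greater = [k for k in keys if k > start]   # tail: keys past the start point
    wrapped = [k for k in keys if k <= start]  # head: keys visited after wrapping
    return greater + wrapped

# Reproduces the logged orders for integers=[0, 2, 4]:
print(cyclic_iteration([0, 2, 4], -1))  # [0, 2, 4]
print(cyclic_iteration([0, 2, 4], 2))   # [4, 0, 2]
print(cyclic_iteration([0, 2, 4], 5))   # [0, 2, 4]
```

Note how even-numbered and odd-numbered starts produce the same order (e.g. start=2 and start=3 both give [4, 0, 2]): only the keys actually present matter, which is why the test probes starts between, at, and beyond the stored keys.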
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-03-19 18:49:29,613 WARN  conf.Configuration (Configuration.java:<clinit>(175)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-03-19 18:49:29,626 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.188 sec
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: -6361765879392191471(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: 6708538188833199763(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 435/261(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: -5468185101547577160(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: 4097043804914909784(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: -6985991378817863271(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 64/255(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: -1305556536731580880(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.006 sec
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-03-19 18:49:31,352 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
    [junit] 2009-03-19 18:49:31,858 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 1186
    [junit] 2009-03-19 18:49:31,937 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 1186 1189 1188 ]
    [junit] 2009-03-19 18:49:38,466 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 1202 1186 1200 1190 1188 1194 1192 1198 1196 ]
    [junit] 2009-03-19 18:49:38,478 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 1186 with SIGTERM. Exit code 0
    [junit] 2009-03-19 18:49:38,478 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-03-19 18:49:38,479 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
    [junit] 2009-03-19 18:49:38,534 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.274 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-03-19 18:49:39,431 WARN  conf.Configuration (Configuration.java:<clinit>(175)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.58 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.192 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build.xml:769: Tests failed!

Total time: 185 minutes 57 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


Hudson build is back to normal: Hadoop-trunk #785

Posted by Apache Hudson Server <hu...@hudson.zones.apache.org>.
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/785/changes