Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2011/04/19 14:22:12 UTC

Hadoop-Hdfs-trunk - Build # 642 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/642/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 730857 lines...]
    [junit] 2011-04-19 12:21:35,904 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:44943, storageID=DS-698805625-127.0.1.1-44943-1303215695276, infoPort=45861, ipcPort=41554):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-19 12:21:35,905 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 41554
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-19 12:21:35,906 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-19 12:21:35,906 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 39285
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 39285: exiting
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 39285
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-19 12:21:36,007 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-19 12:21:36,007 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:49559, storageID=DS-1905199131-127.0.1.1-49559-1303215695148, infoPort=49257, ipcPort=39285):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-19 12:21:36,009 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:49559, storageID=DS-1905199131-127.0.1.1-49559-1303215695148, infoPort=49257, ipcPort=39285):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-19 12:21:36,110 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 39285
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-19 12:21:36,111 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-19 12:21:36,212 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-19 12:21:36,212 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 4 4 
    [junit] 2011-04-19 12:21:36,212 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2896)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-19 12:21:36,213 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 55002
    [junit] 2011-04-19 12:21:36,214 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 55002: exiting
    [junit] 2011-04-19 12:21:36,214 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 55002
    [junit] 2011-04-19 12:21:36,214 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 97.315 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 49 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_17

Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50271], original=[127.0.0.1:50271]

Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50271], original=[127.0.0.1:50271]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)
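
For context on pipeline_Fi_17: the DataStreamer was recovering a write pipeline and asked the NameNode for a replacement datanode, but the recovered pipeline ([127.0.0.1:50271]) is identical to the original, so the "one node longer" check in findNewDatanode threw. Below is a minimal, self-contained sketch of that invariant, reconstructed from the error message alone; the class, method, and variable names are illustrative and not the actual DFSOutputStream source.

    import java.io.IOException;
    import java.util.Arrays;

    public class PipelineRecoveryCheckSketch {
        // After a datanode-replacement request, the recovered pipeline is expected to be
        // exactly one node longer than the original; otherwise the write fails as above.
        static void checkDatanodeAdded(String[] nodes, String[] original) throws IOException {
            if (nodes.length != original.length + 1) {
                throw new IOException("Failed to add a datanode: nodes.length != original.length + 1"
                        + ", nodes=" + Arrays.asList(nodes)
                        + ", original=" + Arrays.asList(original));
            }
        }

        public static void main(String[] args) throws IOException {
            // Same shape as the failing run: no replacement node was granted, so the
            // "new" pipeline equals the original and the check throws.
            checkDatanodeAdded(new String[] {"127.0.0.1:50271"},
                               new String[] {"127.0.0.1:50271"});
        }
    }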


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>

Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182xp1(TestBlockReport.java:457)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)
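
For blockReport_09: the test expected two blocks in the NameNode's pending-replication queue at assertion time but found only one, so a JUnit assertEquals fired with the "expected:<2> but was:<1>" message above. A minimal sketch of that assertion shape follows, assuming a JUnit 3 assertEquals with a message prefix; the helper names are hypothetical and not the actual TestBlockReport code.

    import junit.framework.TestCase;

    public class PendingReplicationAssertionSketch extends TestCase {
        public void testPendingReplicationCount() {
            int expected = 2;                              // blocks the test expects to be pending
            int actual = countPendingReplicationBlocks();  // returns 1 in a failing run
            // Produces "Wrong number of PendingReplication blocks expected:<2> but was:<1>"
            assertEquals("Wrong number of PendingReplication blocks", expected, actual);
        }

        // Hypothetical stand-in for querying the NameNode's pending-replication count
        // in the real test; hard-coded here to reproduce the failing shape.
        private int countPendingReplicationBlocks() {
            return 1;
        }
    }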