Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2011/04/12 14:33:04 UTC
Hadoop-Hdfs-trunk - Build # 635 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/635/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 733846 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
[echo] Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war
cactifywar:
test-cactus:
[echo] Free Ports: startup-25446 / http-25447 / https-25448
[echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[cactus] -----------------------------------------------------------------
[cactus] Running tests against Tomcat 5.x @ http://localhost:25447
[cactus] -----------------------------------------------------------------
[cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
[cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
[cactus] WARNING: multiple versions of ant detected in path for junit
[cactus] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[cactus] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
[cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.472 sec
[cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
[cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.352 sec
[cactus] Tomcat 5.x started on port [25447]
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.33 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.333 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
[cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.858 sec
[cactus] Tomcat 5.x is stopping...
[cactus] Tomcat 5.x is stopped
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!
Total time: 60 minutes 21 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit
Error Message:
expected:<403> but was:<200>
Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
FAILED: org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified
Error Message:
expected:<403> but was:<200>
Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
Hadoop-Hdfs-trunk - Build # 642 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/642/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 730857 lines...]
[junit] 2011-04-19 12:21:35,904 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2011-04-19 12:21:35,905 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-19 12:21:35,905 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:44943, storageID=DS-698805625-127.0.1.1-44943-1303215695276, infoPort=45861, ipcPort=41554):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
[junit] 2011-04-19 12:21:35,905 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 41554
[junit] 2011-04-19 12:21:35,905 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-19 12:21:35,905 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-19 12:21:35,905 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-19 12:21:35,906 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-19 12:21:35,906 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
[junit] 2011-04-19 12:21:36,007 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 39285
[junit] 2011-04-19 12:21:36,007 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 39285: exiting
[junit] 2011-04-19 12:21:36,007 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 39285
[junit] 2011-04-19 12:21:36,007 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] 2011-04-19 12:21:36,007 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2011-04-19 12:21:36,007 WARN datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:49559, storageID=DS-1905199131-127.0.1.1-49559-1303215695148, infoPort=49257, ipcPort=39285):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit]
[junit] 2011-04-19 12:21:36,009 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-19 12:21:36,110 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-19 12:21:36,110 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:49559, storageID=DS-1905199131-127.0.1.1-49559-1303215695148, infoPort=49257, ipcPort=39285):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-04-19 12:21:36,110 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 39285
[junit] 2011-04-19 12:21:36,110 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-19 12:21:36,110 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-19 12:21:36,110 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-19 12:21:36,111 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-19 12:21:36,212 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-19 12:21:36,212 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 4 4
[junit] 2011-04-19 12:21:36,212 WARN namenode.FSNamesystem (FSNamesystem.java:run(2896)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-19 12:21:36,213 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 55002
[junit] 2011-04-19 12:21:36,214 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 55002: exiting
[junit] 2011-04-19 12:21:36,214 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 55002
[junit] 2011-04-19 12:21:36,214 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 97.315 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!
Total time: 49 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION: org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_17
Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50271], original=[127.0.0.1:50271]
Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50271], original=[127.0.0.1:50271]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)
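The "Failed to add a datanode" IOException above comes from a pipeline-recovery invariant: after the client asks for a replacement datanode, the returned pipeline must be exactly one node longer than the original; here both lists contain only 127.0.0.1:50271, so no replacement was found. A self-contained sketch of that check (illustrative only, not the actual DFSOutputStream code; the method name is hypothetical):

```java
import java.io.IOException;
import java.util.Arrays;

// Sketch of the invariant behind the failure: a successful
// replacement must grow the pipeline by exactly one node.
public class PipelineSketch {
    static void checkReplacement(String[] original, String[] nodes)
            throws IOException {
        if (nodes.length != original.length + 1) {
            throw new IOException("Failed to add a datanode: "
                + "nodes.length != original.length + 1, nodes="
                + Arrays.toString(nodes)
                + ", original=" + Arrays.toString(original));
        }
    }

    public static void main(String[] args) {
        String[] original = {"127.0.0.1:50271"};
        String[] nodes = {"127.0.0.1:50271"}; // no new node joined
        try {
            checkReplacement(original, nodes);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In a single-node MiniDFSCluster fault-injection run there is often no spare datanode to add, which is one plausible way this invariant trips.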
REGRESSION: org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09
Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>
Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182xp1(TestBlockReport.java:457)
at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)
Hadoop-Hdfs-trunk - Build # 641 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/641/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 711231 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
[echo] Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war
cactifywar:
test-cactus:
[echo] Free Ports: startup-40822 / http-40823 / https-40824
[echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[cactus] -----------------------------------------------------------------
[cactus] Running tests against Tomcat 5.x @ http://localhost:40823
[cactus] -----------------------------------------------------------------
[cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
[cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
[cactus] WARNING: multiple versions of ant detected in path for junit
[cactus] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[cactus] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
[cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.481 sec
[cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
[cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.333 sec
[cactus] Tomcat 5.x started on port [40823]
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.329 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.318 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
[cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.859 sec
[cactus] Tomcat 5.x is stopping...
[cactus] Tomcat 5.x is stopped
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!
Total time: 62 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit
Error Message:
expected:<403> but was:<200>
Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
FAILED: org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified
Error Message:
expected:<403> but was:<200>
Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
Hadoop-Hdfs-trunk - Build # 640 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/640/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 722627 lines...]
[junit]
[junit] 2011-04-17 12:35:04,371 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-17 12:35:04,371 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-17 12:35:04,371 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:53934, storageID=DS-1753167764-127.0.1.1-53934-1303043703615, infoPort=45352, ipcPort=33069):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
[junit] 2011-04-17 12:35:04,372 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 33069
[junit] 2011-04-17 12:35:04,372 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-17 12:35:04,372 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-17 12:35:04,372 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-17 12:35:04,372 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-17 12:35:04,373 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
[junit] 2011-04-17 12:35:04,473 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 46160
[junit] 2011-04-17 12:35:04,474 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 46160: exiting
[junit] 2011-04-17 12:35:04,474 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 46160
[junit] 2011-04-17 12:35:04,474 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] 2011-04-17 12:35:04,474 WARN datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:45883, storageID=DS-899432502-127.0.1.1-45883-1303043703453, infoPort=52177, ipcPort=46160):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit]
[junit] 2011-04-17 12:35:04,474 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2011-04-17 12:35:04,474 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-17 12:35:04,575 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:45883, storageID=DS-899432502-127.0.1.1-45883-1303043703453, infoPort=52177, ipcPort=46160):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-04-17 12:35:04,575 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 46160
[junit] 2011-04-17 12:35:04,575 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-17 12:35:04,575 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-17 12:35:04,575 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-17 12:35:04,575 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-17 12:35:04,676 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-17 12:35:04,676 WARN namenode.FSNamesystem (FSNamesystem.java:run(2896)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-17 12:35:04,676 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 3
[junit] 2011-04-17 12:35:04,678 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 40108
[junit] 2011-04-17 12:35:04,678 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 40108: exiting
[junit] 2011-04-17 12:35:04,678 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 40108
[junit] 2011-04-17 12:35:04,678 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 99.107 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!
Total time: 61 minutes 40 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION: org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_18
Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:54748], original=[127.0.0.1:54748]
Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:54748], original=[127.0.0.1:54748]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)
Hadoop-Hdfs-trunk - Build # 639 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/639/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1819 lines...]
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/TestHDFSCLI.java:93: cannot find symbol
[javac] symbol : class TestCmd
[javac] location: class org.apache.hadoop.cli.TestHDFSCLI
[javac] protected Result execute(TestCmd cmd) throws Exception {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/CmdFactoryDFS.java:32: cannot find symbol
[javac] symbol : variable DFSADMIN
[javac] location: class org.apache.hadoop.cli.CmdFactoryDFS
[javac] case DFSADMIN:
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/CmdFactoryDFS.java:33: package CLICommands does not exist
[javac] executor = new CLICommands.FSCmdExecutor(tag, new DFSAdmin());
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/CmdFactoryDFS.java:36: cannot find symbol
[javac] symbol : variable CmdFactory
[javac] location: class org.apache.hadoop.cli.CmdFactoryDFS
[javac] executor = CmdFactory.getCommandExecutor(cmd, tag);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java:355: cannot find symbol
[javac] symbol : class TestCmd
[javac] location: class org.apache.hadoop.cli.util.CLITestData
[javac] new CLITestData.TestCmd(cmd, CLITestData.TestCmd.CommandType.DFSADMIN),
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java:355: cannot find symbol
[javac] symbol : variable TestCmd
[javac] location: class org.apache.hadoop.cli.util.CLITestData
[javac] new CLITestData.TestCmd(cmd, CLITestData.TestCmd.CommandType.DFSADMIN),
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 11 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:412: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:446: Compile failed; see the compiler error output for details.
Total time: 44 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk - Build # 638 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/638/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 713453 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
[echo] Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war
cactifywar:
test-cactus:
[echo] Free Ports: startup-44211 / http-44212 / https-44213
[echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
[cactus] -----------------------------------------------------------------
[cactus] Running tests against Tomcat 5.x @ http://localhost:44212
[cactus] -----------------------------------------------------------------
[cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
[cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
[cactus] WARNING: multiple versions of ant detected in path for junit
[cactus] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[cactus] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
[cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.459 sec
[cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
[cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.31 sec
[cactus] Tomcat 5.x started on port [44212]
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.324 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
[cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.33 sec
[cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
[cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.867 sec
[cactus] Tomcat 5.x is stopping...
[cactus] Tomcat 5.x is stopped
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!
Total time: 52 minutes 0 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit
Error Message:
expected:<403> but was:<200>
Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
FAILED: org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified
Error Message:
expected:<403> but was:<200>
Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
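[Editor's note on the two failures above: the `expected:<403> but was:<200>` text is JUnit 3's standard mismatch format. The Cactus `end*` methods asserted that the hdfsproxy authorization filter would reject the request with HTTP 403 Forbidden, but the server answered 200 OK. A minimal, self-contained sketch of how that message arises (hypothetical helper mirroring `junit.framework.Assert`, not the actual Hadoop test code):]

```java
public class AssertSketch {
    // Mirrors JUnit 3's failure-message format: "expected:<X> but was:<Y>"
    static String format(Object expected, Object actual) {
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }

    // Simplified stand-in for junit.framework.Assert.assertEquals(int, int)
    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError(format(expected, actual));
        }
    }

    public static void main(String[] args) {
        try {
            // The failing check: the filter should have returned 403, not 200
            assertEquals(403, 200);
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

[A 200 where 403 was expected typically means the filter permitted a path it was configured to deny, so the place to look is the permit/deny path matching exercised by `endPathPermit` and `endPathPermitQualified`.]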
Hadoop-Hdfs-trunk - Build # 637 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/637/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 695403 lines...]
[junit]
[junit] 2011-04-14 12:25:10,208 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2011-04-14 12:25:10,208 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] 2011-04-14 12:25:10,208 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-14 12:25:10,209 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:53070, storageID=DS-1559300299-127.0.1.1-53070-1302783909591, infoPort=59092, ipcPort=35341):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
[junit] 2011-04-14 12:25:10,209 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 35341
[junit] 2011-04-14 12:25:10,209 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-14 12:25:10,209 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-14 12:25:10,209 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-14 12:25:10,209 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-14 12:25:10,210 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
[junit] 2011-04-14 12:25:10,310 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 53309
[junit] 2011-04-14 12:25:10,311 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 53309: exiting
[junit] 2011-04-14 12:25:10,311 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 53309
[junit] 2011-04-14 12:25:10,311 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] 2011-04-14 12:25:10,311 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2011-04-14 12:25:10,311 WARN datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:55962, storageID=DS-1033477654-127.0.1.1-55962-1302783909440, infoPort=41539, ipcPort=53309):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit]
[junit] 2011-04-14 12:25:10,313 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-14 12:25:10,314 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-14 12:25:10,315 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:55962, storageID=DS-1033477654-127.0.1.1-55962-1302783909440, infoPort=41539, ipcPort=53309):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-04-14 12:25:10,315 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 53309
[junit] 2011-04-14 12:25:10,315 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-14 12:25:10,315 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-14 12:25:10,315 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-14 12:25:10,315 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-14 12:25:10,427 WARN namenode.FSNamesystem (FSNamesystem.java:run(2908)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-14 12:25:10,427 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-14 12:25:10,427 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3
[junit] 2011-04-14 12:25:10,429 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 60224
[junit] 2011-04-14 12:25:10,430 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 60224: exiting
[junit] 2011-04-14 12:25:10,430 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 60224
[junit] 2011-04-14 12:25:10,430 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 98.453 sec
checkfailure:
-run-test-hdfs-fault-inject-withtestcaseonly:
run-test-hdfs-fault-inject:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!
Total time: 51 minutes 48 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed
Hadoop-Hdfs-trunk - Build # 636 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/636/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 715049 lines...]
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit]
[junit] 2011-04-13 12:49:23,947 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-13 12:49:23,948 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-13 12:49:23,948 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:43506, storageID=DS-1486568985-127.0.1.1-43506-1302698963166, infoPort=48490, ipcPort=54645):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
[junit] 2011-04-13 12:49:23,948 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 54645
[junit] 2011-04-13 12:49:23,948 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-13 12:49:23,948 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-13 12:49:23,949 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-13 12:49:23,949 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-13 12:49:23,949 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
[junit] 2011-04-13 12:49:23,950 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 35691
[junit] 2011-04-13 12:49:23,950 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 35691: exiting
[junit] 2011-04-13 12:49:23,951 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 35691
[junit] 2011-04-13 12:49:23,951 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] 2011-04-13 12:49:23,951 WARN datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:50046, storageID=DS-591968703-127.0.1.1-50046-1302698963017, infoPort=48358, ipcPort=35691):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
[junit] at java.lang.Thread.run(Thread.java:662)
[junit]
[junit] 2011-04-13 12:49:23,951 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2011-04-13 12:49:23,952 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
[junit] 2011-04-13 12:49:23,952 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:50046, storageID=DS-591968703-127.0.1.1-50046-1302698963017, infoPort=48358, ipcPort=35691):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
[junit] 2011-04-13 12:49:23,952 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 35691
[junit] 2011-04-13 12:49:23,952 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2011-04-13 12:49:23,952 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2011-04-13 12:49:23,953 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2011-04-13 12:49:23,953 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2011-04-13 12:49:24,054 WARN namenode.FSNamesystem (FSNamesystem.java:run(2908)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-13 12:49:24,054 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2011-04-13 12:49:24,054 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 3 6
[junit] 2011-04-13 12:49:24,056 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 60243
[junit] 2011-04-13 12:49:24,056 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 60243: exiting
[junit] 2011-04-13 12:49:24,056 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 60243
[junit] 2011-04-13 12:49:24,057 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
[junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 99.675 sec
checkfailure:
-run-test-hdfs-fault-inject-withtestcaseonly:
run-test-hdfs-fault-inject:
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!
Total time: 76 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.fs.TestHDFSFileContextMainOperations.testCreateFlagAppendExistingFile
Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.