Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2011/02/05 13:45:07 UTC

Hadoop-Hdfs-trunk - Build # 573 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/573/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 696949 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-02-05 12:44:19,608 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-05 12:44:19,698 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-05 12:44:19,708 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:60658, storageID=DS-1815650214-127.0.1.1-60658-1296909848618, infoPort=37402, ipcPort=54671):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-05 12:44:19,709 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 54671
    [junit] 2011-02-05 12:44:19,709 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-05 12:44:19,709 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-05 12:44:19,709 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-05 12:44:19,710 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-05 12:44:19,812 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-05 12:44:19,812 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-05 12:44:19,812 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-02-05 12:44:19,815 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 45388
    [junit] 2011-02-05 12:44:19,815 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 45388: exiting
    [junit] 2011-02-05 12:44:19,816 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 45388
    [junit] 2011-02-05 12:44:19,816 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 45388: exiting
    [junit] 2011-02-05 12:44:19,816 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-05 12:44:19,816 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 45388: exiting
    [junit] 2011-02-05 12:44:19,816 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 45388: exiting
    [junit] 2011-02-05 12:44:19,816 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 45388: exiting
    [junit] 2011-02-05 12:44:19,817 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 45388: exiting
    [junit] 2011-02-05 12:44:19,817 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 45388: exiting
    [junit] 2011-02-05 12:44:19,817 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 45388: exiting
    [junit] 2011-02-05 12:44:19,817 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 45388: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.904 sec
    [junit] 2011-02-05 12:44:19,820 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 45388: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 69 minutes 51 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:514)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2j2e00jr71(TestBlockReport.java:408)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08(TestBlockReport.java:390)
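
For context, this failure is a poll-and-timeout pattern: the test repeatedly queries the DataNode for the replica's state and fails once a deadline passes without the replica reaching TEMPORARY. A minimal, self-contained sketch of that pattern (all names below are assumed for illustration, not the actual TestBlockReport internals):

    import junit.framework.Assert;

    enum ReplicaState { TEMPORARY, RBW, FINALIZED }

    class ReplicaWaiter {
      // Stand-in for whatever queries the DataNode for the replica's state.
      interface StateSource { ReplicaState state(); }

      static void waitForTemporary(StateSource src, long timeoutMs)
          throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (src.state() != ReplicaState.TEMPORARY) {
          Assert.assertTrue(
              "Was waiting too long for a replica to become TEMPORARY",
              System.currentTimeMillis() < deadline);
          Thread.sleep(100);  // brief back-off between polls
        }
      }
    }

A fixed deadline like this is sensitive to load on the build machine, which can make the test pass locally yet show up here as a REGRESSION.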


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore

Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9bd71805a213b5bbe19c7b70e192966e but expecting f835a1af7ec34ec3e0955f537280da4b

Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9bd71805a213b5bbe19c7b70e192966e but expecting f835a1af7ec34ec3e0955f537280da4b
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:670)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:710)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:603)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:480)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4ubl(TestStorageRestore.java:316)
	at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
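
The failure above is the SecondaryNameNode's integrity check when loading the checkpoint image: it computes the file's MD5 and compares it against the digest it expects, raising IOException on mismatch. A hedged sketch of that compare-digest step (class and method names are illustrative, not the real FSImage code):

    import java.io.*;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    class ImageDigestCheck {
      static void verify(File image, String expectedHex)
          throws IOException, NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (InputStream in = new FileInputStream(image)) {
          byte[] buf = new byte[8192];
          for (int n; (n = in.read(buf)) != -1; ) {
            md5.update(buf, 0, n);  // stream the image through the digest
          }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
          hex.append(String.format("%02x", b));
        }
        if (!hex.toString().equals(expectedHex)) {
          throw new IOException("Image file " + image
              + " is corrupt with MD5 checksum of " + hex
              + " but expecting " + expectedHex);
        }
      }
    }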




Hadoop-Hdfs-trunk - Build # 642 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/642/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 730857 lines...]
    [junit] 2011-04-19 12:21:35,904 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:44943, storageID=DS-698805625-127.0.1.1-44943-1303215695276, infoPort=45861, ipcPort=41554):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-19 12:21:35,905 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 41554
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-19 12:21:35,905 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-19 12:21:35,906 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-19 12:21:35,906 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 39285
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 39285: exiting
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 39285
    [junit] 2011-04-19 12:21:36,007 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-19 12:21:36,007 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-19 12:21:36,007 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:49559, storageID=DS-1905199131-127.0.1.1-49559-1303215695148, infoPort=49257, ipcPort=39285):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-19 12:21:36,009 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:49559, storageID=DS-1905199131-127.0.1.1-49559-1303215695148, infoPort=49257, ipcPort=39285):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-19 12:21:36,110 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 39285
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-19 12:21:36,110 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-19 12:21:36,111 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-19 12:21:36,212 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-19 12:21:36,212 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 4 4 
    [junit] 2011-04-19 12:21:36,212 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2896)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-19 12:21:36,213 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 55002
    [junit] 2011-04-19 12:21:36,214 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 55002: exiting
    [junit] 2011-04-19 12:21:36,214 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 55002
    [junit] 2011-04-19 12:21:36,214 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 97.315 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 49 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_17

Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50271], original=[127.0.0.1:50271]

Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50271], original=[127.0.0.1:50271]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)
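
This fault-injection failure exercises the client's replace-datanode-on-failure path: after a pipeline error, DataStreamer asks the NameNode for one additional DataNode and then checks that the rebuilt pipeline is exactly one node longer than the original. When no additional DataNode is available (here both lists contain the same single node), the check aborts recovery. An illustrative sketch of that invariant (assumed names, simplified from the real DFSOutputStream logic):

    import java.io.IOException;
    import java.util.Arrays;

    class PipelineCheck {
      // After requesting one extra node for pipeline recovery, the new
      // node list must be exactly one longer than the original.
      static void checkNewDatanode(String[] original, String[] nodes)
          throws IOException {
        if (nodes.length != original.length + 1) {
          throw new IOException("Failed to add a datanode: "
              + "nodes.length != original.length + 1, nodes="
              + Arrays.toString(nodes)
              + ", original=" + Arrays.toString(original));
        }
      }
    }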


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>

Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182xp1(TestBlockReport.java:457)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)




Hadoop-Hdfs-trunk - Build # 641 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/641/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 711231 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-40822 / http-40823 / https-40824
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:40823
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.481 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.333 sec
   [cactus] Tomcat 5.x started on port [40823]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.329 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.318 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.859 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 62 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
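
Both hdfsproxy failures follow the Cactus client/server split: the testXxx method runs inside Tomcat, and the matching endXxx method runs back on the client to inspect the HTTP response, here expecting the authorization filter to reject the path with 403 but observing 200. A minimal sketch of that end-method pattern (illustrative only, not the real TestAuthorizationFilter; assumes the Cactus WebResponse API):

    import junit.framework.Assert;
    import org.apache.cactus.WebResponse;

    public class AuthorizationFilterCheck {
      // Invoked by Cactus on the client side after testPathPermit() has
      // executed in the servlet container.
      public void endPathPermit(WebResponse response) {
        Assert.assertEquals(403, response.getStatusCode());
      }
    }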




Hadoop-Hdfs-trunk - Build # 640 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/640/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 722627 lines...]
    [junit] 
    [junit] 2011-04-17 12:35:04,371 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-17 12:35:04,371 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-17 12:35:04,371 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:53934, storageID=DS-1753167764-127.0.1.1-53934-1303043703615, infoPort=45352, ipcPort=33069):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-17 12:35:04,372 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 33069
    [junit] 2011-04-17 12:35:04,372 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-17 12:35:04,372 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-17 12:35:04,372 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-17 12:35:04,372 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-17 12:35:04,373 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-17 12:35:04,473 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 46160
    [junit] 2011-04-17 12:35:04,474 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 46160: exiting
    [junit] 2011-04-17 12:35:04,474 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 46160
    [junit] 2011-04-17 12:35:04,474 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-17 12:35:04,474 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:45883, storageID=DS-899432502-127.0.1.1-45883-1303043703453, infoPort=52177, ipcPort=46160):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-17 12:35:04,474 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-17 12:35:04,474 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-17 12:35:04,575 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:45883, storageID=DS-899432502-127.0.1.1-45883-1303043703453, infoPort=52177, ipcPort=46160):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-17 12:35:04,575 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 46160
    [junit] 2011-04-17 12:35:04,575 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-17 12:35:04,575 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-17 12:35:04,575 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-17 12:35:04,575 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-17 12:35:04,676 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-17 12:35:04,676 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2896)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-17 12:35:04,676 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 3 
    [junit] 2011-04-17 12:35:04,678 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 40108
    [junit] 2011-04-17 12:35:04,678 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 40108: exiting
    [junit] 2011-04-17 12:35:04,678 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 40108
    [junit] 2011-04-17 12:35:04,678 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 99.107 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 61 minutes 40 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_18

Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:54748], original=[127.0.0.1:54748]

Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:54748], original=[127.0.0.1:54748]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)




Hadoop-Hdfs-trunk - Build # 639 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/639/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1819 lines...]
    [javac]                                              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/TestHDFSCLI.java:93: cannot find symbol
    [javac] symbol  : class TestCmd
    [javac] location: class org.apache.hadoop.cli.TestHDFSCLI
    [javac]   protected Result execute(TestCmd cmd) throws Exception {
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/CmdFactoryDFS.java:32: cannot find symbol
    [javac] symbol  : variable DFSADMIN
    [javac] location: class org.apache.hadoop.cli.CmdFactoryDFS
    [javac]       case DFSADMIN:
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/CmdFactoryDFS.java:33: package CLICommands does not exist
    [javac]         executor = new CLICommands.FSCmdExecutor(tag, new DFSAdmin());
    [javac]                                   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/cli/CmdFactoryDFS.java:36: cannot find symbol
    [javac] symbol  : variable CmdFactory
    [javac] location: class org.apache.hadoop.cli.CmdFactoryDFS
    [javac]         executor = CmdFactory.getCommandExecutor(cmd, tag);
    [javac]                    ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java:355: cannot find symbol
    [javac] symbol  : class TestCmd
    [javac] location: class org.apache.hadoop.cli.util.CLITestData
    [javac]           new CLITestData.TestCmd(cmd, CLITestData.TestCmd.CommandType.DFSADMIN),
    [javac]                          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java:355: cannot find symbol
    [javac] symbol  : variable TestCmd
    [javac] location: class org.apache.hadoop.cli.util.CLITestData
    [javac]           new CLITestData.TestCmd(cmd, CLITestData.TestCmd.CommandType.DFSADMIN),
    [javac]                                                   ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 11 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:412: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:446: Compile failed; see the compiler error output for details.

Total time: 44 seconds


======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 638 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/638/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 713453 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-44211 / http-44212 / https-44213
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:44212
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.459 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.31 sec
   [cactus] Tomcat 5.x started on port [44212]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.324 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.33 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.867 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 52 minutes 0 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 637 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/637/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 695403 lines...]
    [junit] 
    [junit] 2011-04-14 12:25:10,208 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-14 12:25:10,208 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-14 12:25:10,208 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-14 12:25:10,209 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:53070, storageID=DS-1559300299-127.0.1.1-53070-1302783909591, infoPort=59092, ipcPort=35341):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-14 12:25:10,209 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 35341
    [junit] 2011-04-14 12:25:10,209 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-14 12:25:10,209 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-14 12:25:10,209 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-14 12:25:10,209 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-14 12:25:10,210 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-14 12:25:10,310 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 53309
    [junit] 2011-04-14 12:25:10,311 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 53309: exiting
    [junit] 2011-04-14 12:25:10,311 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 53309
    [junit] 2011-04-14 12:25:10,311 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-14 12:25:10,311 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-14 12:25:10,311 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:55962, storageID=DS-1033477654-127.0.1.1-55962-1302783909440, infoPort=41539, ipcPort=53309):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-14 12:25:10,313 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-14 12:25:10,314 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-14 12:25:10,315 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:55962, storageID=DS-1033477654-127.0.1.1-55962-1302783909440, infoPort=41539, ipcPort=53309):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-14 12:25:10,315 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 53309
    [junit] 2011-04-14 12:25:10,315 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-14 12:25:10,315 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-14 12:25:10,315 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-14 12:25:10,315 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-14 12:25:10,427 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2908)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-14 12:25:10,427 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-14 12:25:10,427 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-04-14 12:25:10,429 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 60224
    [junit] 2011-04-14 12:25:10,430 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 60224: exiting
    [junit] 2011-04-14 12:25:10,430 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 60224
    [junit] 2011-04-14 12:25:10,430 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 98.453 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 51 minutes 48 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Hdfs-trunk - Build # 636 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/636/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 715049 lines...]
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-13 12:49:23,947 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-13 12:49:23,948 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-13 12:49:23,948 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:43506, storageID=DS-1486568985-127.0.1.1-43506-1302698963166, infoPort=48490, ipcPort=54645):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-13 12:49:23,948 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 54645
    [junit] 2011-04-13 12:49:23,948 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-13 12:49:23,948 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-13 12:49:23,949 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-13 12:49:23,949 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-13 12:49:23,949 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-13 12:49:23,950 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 35691
    [junit] 2011-04-13 12:49:23,950 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 35691: exiting
    [junit] 2011-04-13 12:49:23,951 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 35691
    [junit] 2011-04-13 12:49:23,951 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-13 12:49:23,951 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:50046, storageID=DS-591968703-127.0.1.1-50046-1302698963017, infoPort=48358, ipcPort=35691):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-13 12:49:23,951 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-13 12:49:23,952 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-13 12:49:23,952 INFO  datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:50046, storageID=DS-591968703-127.0.1.1-50046-1302698963017, infoPort=48358, ipcPort=35691):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-13 12:49:23,952 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 35691
    [junit] 2011-04-13 12:49:23,952 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-13 12:49:23,952 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-13 12:49:23,953 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-13 12:49:23,953 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-13 12:49:24,054 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2908)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-13 12:49:24,054 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-13 12:49:24,054 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 3 6 
    [junit] 2011-04-13 12:49:24,056 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 60243
    [junit] 2011-04-13 12:49:24,056 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 60243: exiting
    [junit] 2011-04-13 12:49:24,056 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 60243
    [junit] 2011-04-13 12:49:24,057 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 99.675 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 76 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.fs.TestHDFSFileContextMainOperations.testCreateFlagAppendExistingFile

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-trunk - Build # 635 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/635/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 733846 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-25446 / http-25447 / https-25448
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:25447
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.472 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.352 sec
   [cactus] Tomcat 5.x started on port [25447]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.33 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.333 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.858 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 60 minutes 21 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
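Both failures above have the same shape: a Cactus end-hook compares the HTTP status the authorization filter actually returned (200, the request was let through) against the status the test expects (403 Forbidden). A hedged sketch of that pattern only; the class skeleton, the filter setup, and the hook bodies are assumptions read off the assertion message, not the project's source:

    import org.apache.cactus.FilterTestCase;
    import org.apache.cactus.WebResponse;

    public class AuthorizationFilterSketch extends FilterTestCase {
      public void testPathPermit() {
        // server side: drive the filter chain with a request the caller
        // should not be authorized for (setup elided in this sketch)
      }
      // Cactus runs endXXX on the client after testXXX ran in the container;
      // an assertion here is what prints "expected:<403> but was:<200>".
      public void endPathPermit(WebResponse response) {
        assertEquals(403, response.getStatusCode());
      }
    }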




Hadoop-Hdfs-trunk - Build # 634 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/634/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 719226 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-11 12:23:49,932 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-11 12:23:49,932 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-11 12:23:49,933 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:47347, storageID=DS-582507416-127.0.1.1-47347-1302524629514, infoPort=54226, ipcPort=44767):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-11 12:23:49,933 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 44767
    [junit] 2011-04-11 12:23:49,933 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-11 12:23:49,933 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-11 12:23:49,933 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-11 12:23:49,933 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-11 12:23:49,934 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-11 12:23:50,034 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 48638
    [junit] 2011-04-11 12:23:50,035 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 48638: exiting
    [junit] 2011-04-11 12:23:50,035 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-11 12:23:50,035 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-11 12:23:50,035 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 48638
    [junit] 2011-04-11 12:23:50,035 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:34265, storageID=DS-1376291308-127.0.1.1-34265-1302524629363, infoPort=47209, ipcPort=48638):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-11 12:23:50,037 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-11 12:23:50,038 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-11 12:23:50,038 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:34265, storageID=DS-1376291308-127.0.1.1-34265-1302524629363, infoPort=47209, ipcPort=48638):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-11 12:23:50,038 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 48638
    [junit] 2011-04-11 12:23:50,038 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-11 12:23:50,039 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-11 12:23:50,039 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-11 12:23:50,039 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-11 12:23:50,140 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-11 12:23:50,140 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-11 12:23:50,140 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 4 
    [junit] 2011-04-11 12:23:50,142 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 47278
    [junit] 2011-04-11 12:23:50,142 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 47278: exiting
    [junit] 2011-04-11 12:23:50,142 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 47278
    [junit] 2011-04-11 12:23:50,142 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 95.075 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 50 minutes 31 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:514)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182rgt(TestBlockReport.java:451)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)
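The helper named in this trace is a poll-until-deadline wait: it repeatedly checks whether a replica has reached the TEMPORARY state and gives up with the message above once a timeout elapses, which is why a slow or wedged datanode thread shows up as this assertion rather than as a hang. A hedged sketch of that loop shape; the probe interface, the poll interval, and the timeout handling are illustrative assumptions:

    import junit.framework.Assert;

    public class WaitForTempReplicaSketch {
      interface ReplicaProbe { boolean isTemporary(); } // assumed probe

      static void waitForTempReplica(ReplicaProbe probe, long timeoutMs)
          throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!probe.isTemporary()) {
          if (System.currentTimeMillis() > deadline) {
            Assert.fail("Was waiting too long for a replica to become TEMPORARY");
          }
          Thread.sleep(100); // poll instead of busy-spinning
        }
      }
    }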




Hadoop-Hdfs-trunk - Build # 633 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/633/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 756184 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-54162 / http-54163 / https-54164
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:54163
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.5 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.352 sec
   [cactus] Tomcat 5.x started on port [54163]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.341 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.868 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 60 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 632 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/632/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 721228 lines...]
    [junit] 2011-04-09 12:23:33,036 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-09 12:23:33,036 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-09 12:23:33,037 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:33367, storageID=DS-2055774178-127.0.1.1-33367-1302351812614, infoPort=48133, ipcPort=32836):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-09 12:23:33,037 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 32836
    [junit] 2011-04-09 12:23:33,037 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-09 12:23:33,037 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-09 12:23:33,037 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-09 12:23:33,037 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-09 12:23:33,038 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-09 12:23:33,138 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 34146
    [junit] 2011-04-09 12:23:33,139 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 34146: exiting
    [junit] 2011-04-09 12:23:33,139 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 34146
    [junit] 2011-04-09 12:23:33,139 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-09 12:23:33,139 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-09 12:23:33,139 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:45214, storageID=DS-1160250199-127.0.1.1-45214-1302351812466, infoPort=56656, ipcPort=34146):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-09 12:23:33,141 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-09 12:23:33,242 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-09 12:23:33,242 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:45214, storageID=DS-1160250199-127.0.1.1-45214-1302351812466, infoPort=56656, ipcPort=34146):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-09 12:23:33,242 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 34146
    [junit] 2011-04-09 12:23:33,242 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-09 12:23:33,242 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-09 12:23:33,242 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-09 12:23:33,243 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-09 12:23:33,344 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-09 12:23:33,344 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 7 3 
    [junit] 2011-04-09 12:23:33,344 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-09 12:23:33,345 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 34612
    [junit] 2011-04-09 12:23:33,346 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 34612: exiting
    [junit] 2011-04-09 12:23:33,346 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 34612
    [junit] 2011-04-09 12:23:33,346 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 94.844 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 50 minutes 16 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: 
	at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.runTest29_30(TestFiDataTransferProtocol2.java:153)
	at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29(TestFiDataTransferProtocol2.java:251)
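The "Error Message: null" above is not a lost message: a JUnit 3 assertion made without a description string throws AssertionFailedError with a null message, so only the stack trace locates the failed check. A minimal demonstration of that behavior, unrelated to the pipeline test itself:

    import junit.framework.Assert;
    import junit.framework.AssertionFailedError;

    public class NullMessageDemo {
      public static void main(String[] args) {
        try {
          Assert.assertTrue(false);           // no message supplied
        } catch (AssertionFailedError e) {
          System.out.println(e.getMessage()); // prints "null"
        }
      }
    }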




Hadoop-Hdfs-trunk - Build # 631 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/631/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 736597 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-08 12:22:13,792 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-08 12:22:13,792 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-08 12:22:13,793 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:35209, storageID=DS-1023019400-127.0.1.1-35209-1302265323192, infoPort=53571, ipcPort=35154):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-08 12:22:13,793 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 35154
    [junit] 2011-04-08 12:22:13,793 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-08 12:22:13,793 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-08 12:22:13,794 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-08 12:22:13,794 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-08 12:22:13,794 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-08 12:22:13,896 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 52755
    [junit] 2011-04-08 12:22:13,896 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 52755: exiting
    [junit] 2011-04-08 12:22:13,896 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-08 12:22:13,896 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-08 12:22:13,897 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:60366, storageID=DS-1346925635-127.0.1.1-60366-1302265323017, infoPort=51510, ipcPort=52755):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-08 12:22:13,897 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 52755
    [junit] 2011-04-08 12:22:13,899 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-08 12:22:13,999 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-08 12:22:13,999 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:60366, storageID=DS-1346925635-127.0.1.1-60366-1302265323017, infoPort=51510, ipcPort=52755):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-08 12:22:14,000 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 52755
    [junit] 2011-04-08 12:22:14,000 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-08 12:22:14,000 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-08 12:22:14,000 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-08 12:22:14,001 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-08 12:22:14,102 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-08 12:22:14,103 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 4 
    [junit] 2011-04-08 12:22:14,103 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-08 12:22:14,104 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 55573
    [junit] 2011-04-08 12:22:14,105 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 55573: exiting
    [junit] 2011-04-08 12:22:14,105 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 55573
    [junit] 2011-04-08 12:22:14,105 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.623 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 48 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testCount

Error Message:
not supposed to get here

Stack Trace:
java.lang.RuntimeException: not supposed to get here
	at org.apache.hadoop.fs.shell.FsCommand.run(FsCommand.java:51)
	at org.apache.hadoop.fs.shell.Command.runAll(Command.java:100)
	at org.apache.hadoop.hdfs.TestDFSShell.runCount(TestDFSShell.java:737)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2xc567w1396(TestDFSShell.java:705)
	at org.apache.hadoop.hdfs.TestDFSShell.testCount(TestDFSShell.java:694)
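The trace above shows Command.runAll dispatching into FsCommand.run, which throws on purpose: "not supposed to get here" reads as a dead-path guard left in place while the shell commands were being reworked, so a command still routed through the legacy entry point fails loudly instead of silently misbehaving. A hedged sketch of that guard pattern; the method bodies are a guess from the message, not the actual FsCommand source:

    public class DeadPathGuardSketch {
      // legacy entry point, superseded; kept so stale callers fail loudly
      public int run(String... args) {
        throw new RuntimeException("not supposed to get here");
      }
      public int runAll(String... args) {
        // the supported path would parse arguments and dispatch here
        return 0;
      }
    }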




Hadoop-Hdfs-trunk - Build # 630 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/630/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 716427 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-36789 / http-36790 / https-36791
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:36790
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.483 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.323 sec
   [cactus] Tomcat 5.x started on port [36790]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.353 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.871 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 59 minutes 36 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 629 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/629/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 721564 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-21331 / http-21332 / https-21333
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:21332
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.464 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.373 sec
   [cactus] Tomcat 5.x started on port [21332]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.315 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.342 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.823 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 49 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 628 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/628/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 739272 lines...]
    [junit] 	... 11 more
    [junit] 2011-04-05 12:22:42,478 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,478 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-05 12:22:42,479 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:60473, storageID=DS-2030027606-127.0.1.1-60473-1302006152029, infoPort=37469, ipcPort=48812):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-05 12:22:42,479 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 48812
    [junit] 2011-04-05 12:22:42,479 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,479 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-05 12:22:42,480 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-05 12:22:42,480 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-05 12:22:42,480 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-05 12:22:42,582 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 46639
    [junit] 2011-04-05 12:22:42,582 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 46639: exiting
    [junit] 2011-04-05 12:22:42,583 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 46639
    [junit] 2011-04-05 12:22:42,583 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-05 12:22:42,583 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:48393, storageID=DS-1552435921-127.0.1.1-48393-1302006151861, infoPort=37777, ipcPort=46639):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-05 12:22:42,584 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,684 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-05 12:22:42,685 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:48393, storageID=DS-1552435921-127.0.1.1-48393-1302006151861, infoPort=37777, ipcPort=46639):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-05 12:22:42,685 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 46639
    [junit] 2011-04-05 12:22:42,685 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-05 12:22:42,685 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-05 12:22:42,685 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-05 12:22:42,686 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-05 12:22:42,788 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-05 12:22:42,788 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-05 12:22:42,788 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-04-05 12:22:42,790 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 58226
    [junit] 2011-04-05 12:22:42,790 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 58226: exiting
    [junit] 2011-04-05 12:22:42,790 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 58226
    [junit] 2011-04-05 12:22:42,790 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.457 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 49 minutes 32 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: 
	at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.runTest29_30(TestFiDataTransferProtocol2.java:153)
	at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29(TestFiDataTransferProtocol2.java:251)
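
This failure is an assertion made without a message string: in JUnit 3, assertTrue(condition) or fail() with no message argument yields an AssertionFailedError whose message is null, which is exactly what "Error Message: null" above reports. A minimal sketch of both styles (class name and condition are hypothetical, purely for illustration, not TestFiDataTransferProtocol2's code):

    // Minimal JUnit 3 sketch: a message-less assertion reports "null".
    import junit.framework.TestCase;

    public class NullMessageSketch extends TestCase {
        public void testWithoutMessage() {
            boolean pipelineRecovered = false;   // stand-in for the real check
            assertTrue(pipelineRecovered);       // reports "Error Message: null"
        }

        public void testWithMessage() {
            boolean pipelineRecovered = false;
            assertTrue("pipeline did not recover after injected fault",
                       pipelineRecovered);       // self-describing on failure
        }
    }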




Hadoop-Hdfs-trunk - Build # 627 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/627/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 718791 lines...]
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-04-04 12:47:28,815 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-04 12:47:28,815 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-04 12:47:28,815 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:57751, storageID=DS-1453852092-127.0.1.1-57751-1301921238223, infoPort=40641, ipcPort=44415):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-04 12:47:28,816 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 44415
    [junit] 2011-04-04 12:47:28,816 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-04 12:47:28,816 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-04 12:47:28,816 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-04 12:47:28,816 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-04 12:47:28,817 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-04 12:47:28,918 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 48120
    [junit] 2011-04-04 12:47:28,918 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 48120: exiting
    [junit] 2011-04-04 12:47:28,932 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 48120
    [junit] 2011-04-04 12:47:28,933 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-04 12:47:28,933 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:56077, storageID=DS-244328423-127.0.1.1-56077-1301921238026, infoPort=60079, ipcPort=48120):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-04 12:47:28,933 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-04 12:47:29,033 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-04 12:47:29,034 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:56077, storageID=DS-244328423-127.0.1.1-56077-1301921238026, infoPort=60079, ipcPort=48120):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-04 12:47:29,034 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 48120
    [junit] 2011-04-04 12:47:29,034 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-04 12:47:29,034 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-04 12:47:29,035 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-04 12:47:29,035 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-04 12:47:29,137 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-04 12:47:29,137 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-04 12:47:29,137 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 11 2 
    [junit] 2011-04-04 12:47:29,139 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 45063
    [junit] 2011-04-04 12:47:29,139 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 45063: exiting
    [junit] 2011-04-04 12:47:29,140 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 45063
    [junit] 2011-04-04 12:47:29,140 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.543 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 73 minutes 42 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestLargeBlock.testLargeBlockSize

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
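
The note in this message means the test harness killed the forked test JVM when it hit the task-level timeout, so the elapsed time printed in the report is not the time the test actually ran before hanging. One hedged, generic defense is to bound the slow work inside the test itself so a hang fails fast with a descriptive message (a sketch only, not TestLargeBlock's actual code; the 60-second deadline is illustrative):

    // Hedged sketch: self-imposed deadline inside a JUnit 3 test.
    import junit.framework.TestCase;

    public class BoundedTestSketch extends TestCase {
        public void testLargeWriteWithDeadline() throws Exception {
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    // ... long-running write/verify work would go here ...
                }
            });
            worker.start();
            worker.join(60 * 1000L);             // wait at most 60 seconds
            if (worker.isAlive()) {
                worker.interrupt();
                fail("test body exceeded the 60s deadline");
            }
        }
    }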


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08

Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>

Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2j2e00jrg8(TestBlockReport.java:414)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08(TestBlockReport.java:390)
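
The "expected:<2> but was:<1>" shape comes from JUnit 3's assertEquals(message, expected, actual). Seeing 1 pending-replication block where 2 were expected is characteristic of a timing race: the assertion ran before the NameNode finished digesting the block report. A common guard is to poll briefly before asserting; a hedged sketch follows, where getPendingReplicationCount() is a hypothetical stand-in for whatever counter the test reads:

    // Hedged sketch of a poll-before-assert guard for a racy counter.
    import junit.framework.TestCase;

    public class PendingReplicationSketch extends TestCase {
        private long pending = 1;                      // stand-in value

        private long getPendingReplicationCount() {    // hypothetical accessor
            return pending;
        }

        public void testBlockReportSettles() throws Exception {
            long expected = 2;
            long deadline = System.currentTimeMillis() + 30 * 1000L;
            while (getPendingReplicationCount() != expected
                   && System.currentTimeMillis() < deadline) {
                Thread.sleep(100);                     // let the report settle
            }
            assertEquals("Wrong number of PendingReplication blocks",
                         expected, getPendingReplicationCount());
        }
    }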




Hadoop-Hdfs-trunk - Build # 626 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/626/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 723583 lines...]
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-04-03 12:23:29,427 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-03 12:23:29,427 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-03 12:23:29,427 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:51460, storageID=DS-1312723792-127.0.1.1-51460-1301833398906, infoPort=42430, ipcPort=50455):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-03 12:23:29,427 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 50455
    [junit] 2011-04-03 12:23:29,428 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-03 12:23:29,428 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-03 12:23:29,428 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-03 12:23:29,428 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-03 12:23:29,429 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-03 12:23:29,530 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 55674
    [junit] 2011-04-03 12:23:29,530 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 55674: exiting
    [junit] 2011-04-03 12:23:29,531 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 55674
    [junit] 2011-04-03 12:23:29,531 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-03 12:23:29,531 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:52044, storageID=DS-2081569357-127.0.1.1-52044-1301833398732, infoPort=47230, ipcPort=55674):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-03 12:23:29,531 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-03 12:23:29,632 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-03 12:23:29,632 INFO  datanode.DataNode (DataNode.java:run(1496)) - DatanodeRegistration(127.0.0.1:52044, storageID=DS-2081569357-127.0.1.1-52044-1301833398732, infoPort=47230, ipcPort=55674):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-03 12:23:29,632 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 55674
    [junit] 2011-04-03 12:23:29,633 INFO  datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-03 12:23:29,633 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-03 12:23:29,633 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-03 12:23:29,634 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-03 12:23:29,645 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-03 12:23:29,645 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-03 12:23:29,645 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 4 3 
    [junit] 2011-04-03 12:23:29,647 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 37535
    [junit] 2011-04-03 12:23:29,647 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 37535: exiting
    [junit] 2011-04-03 12:23:29,647 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 37535
    [junit] 2011-04-03 12:23:29,647 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.37 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 49 minutes 53 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>

Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182rgt(TestBlockReport.java:457)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)
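
This is the same failure family as blockReport_08 in build 627. Two reading aids: the __CLR3_0_2... frames are Clover's instrumentation wrappers (these builds publish Clover coverage), so TestBlockReport.java:457 is the real source line; and the message text is JUnit 3's standard assertEquals formatting, which a two-line demonstration reproduces:

    // Demonstrates JUnit 3's failure-message format; the values are taken
    // from the report above, not recomputed.
    import junit.framework.Assert;
    import junit.framework.AssertionFailedError;

    public class FormatDemo {
        public static void main(String[] args) {
            try {
                Assert.assertEquals("Wrong number of PendingReplication blocks",
                                    2, 1);
            } catch (AssertionFailedError e) {
                // Prints: Wrong number of PendingReplication blocks
                //         expected:<2> but was:<1>
                System.out.println(e.getMessage());
            }
        }
    }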




Hadoop-Hdfs-trunk - Build # 625 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/625/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 713798 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-34955 / http-34956 / https-34957
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:34956
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.455 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.323 sec
   [cactus] Tomcat 5.x started on port [34956]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.336 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.316 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.861 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 49 minutes 34 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
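
Both failures say the filter answered 200 OK where the test demanded 403 Forbidden, i.e. a request the test considers unauthorized was let through. For orientation, a hedged sketch of the general shape of a path-authorization servlet filter (this is not the HdfsProxy implementation; the permitted-prefix policy is illustrative only):

    // Hedged sketch of a deny-by-default path filter using the standard
    // javax.servlet API; configuration and policy are illustrative.
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PathAuthorizationSketch implements Filter {
        private String permittedPrefix;

        public void init(FilterConfig conf) {
            permittedPrefix = conf.getInitParameter("permitted.prefix");
        }

        public void doFilter(ServletRequest req, ServletResponse resp,
                             FilterChain chain)
                throws IOException, ServletException {
            String path = ((HttpServletRequest) req).getRequestURI();
            if (permittedPrefix == null || !path.startsWith(permittedPrefix)) {
                // Falling through to chain.doFilter() here instead would
                // produce the observed 200-instead-of-403.
                ((HttpServletResponse) resp)
                    .sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, resp);
        }

        public void destroy() {}
    }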




Hadoop-Hdfs-trunk - Build # 624 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/624/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 738293 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-25460 / http-25461 / https-25462
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:25461
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.489 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.492 sec
   [cactus] Tomcat 5.x started on port [25461]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.329 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.34 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.882 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 50 minutes 15 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
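
These are the same two TestAuthorizationFilter failures as build 625, so the regression is stable rather than flaky. The stack trace also shows Cactus's client/server split: testXXX runs inside Tomcat, then Cactus calls the matching endXXX(WebResponse) method on the client, which is where the status-code assertion lives. A hedged sketch of that shape (assuming Cactus's WebResponse.getStatusCode(); this is not the actual test source):

    // Hedged sketch of a Cactus end-method assertion, mirroring the
    // endPathPermit frame in the stack trace above.
    import org.apache.cactus.ServletTestCase;
    import org.apache.cactus.WebResponse;

    public class AuthorizationFilterSketch extends ServletTestCase {
        public void testPathPermit() {
            // server-side: exercise the filter inside the container
        }

        // Cactus invokes this on the client with the HTTP response:
        public void endPathPermit(WebResponse response) {
            assertEquals(403, response.getStatusCode());   // observed: 200
        }
    }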




Hadoop-Hdfs-trunk - Build # 623 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/623/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 710777 lines...]
    [junit] 2011-03-31 12:33:18,494 INFO  datanode.DataNode (BlockReceiver.java:run(926)) - PacketResponder blk_8517475587862166522_1001 0 : Thread is interrupted.
    [junit] 2011-03-31 12:33:18,494 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 3
    [junit] 2011-03-31 12:33:18,494 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-03-31 12:33:18,495 INFO  datanode.DataNode (BlockReceiver.java:run(1010)) - PacketResponder 0 for block blk_8517475587862166522_1001 terminating
    [junit] 2011-03-31 12:33:18,495 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:51778, storageID=DS-1795582721-127.0.1.1-51778-1301574787630, infoPort=48804, ipcPort=48740)
    [junit] 2011-03-31 12:33:18,496 ERROR datanode.DataNode (DataXceiver.java:run(132)) - DatanodeRegistration(127.0.0.1:51778, storageID=DS-1795582721-127.0.1.1-51778-1301574787630, infoPort=48804, ipcPort=48740):DataXceiver
    [junit] java.lang.RuntimeException: java.lang.InterruptedException: sleep interrupted
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:82)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:346)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:463)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:651)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:360)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:332)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-31 12:33:18,497 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-31 12:33:18,598 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-31 12:33:18,598 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:51778, storageID=DS-1795582721-127.0.1.1-51778-1301574787630, infoPort=48804, ipcPort=48740):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-31 12:33:18,598 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 48740
    [junit] 2011-03-31 12:33:18,598 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-31 12:33:18,599 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-31 12:33:18,599 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-31 12:33:18,599 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-31 12:33:18,701 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-31 12:33:18,701 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 3 
    [junit] 2011-03-31 12:33:18,701 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2857)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-31 12:33:18,703 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 59355
    [junit] 2011-03-31 12:33:18,703 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 59355: exiting
    [junit] 2011-03-31 12:33:18,703 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 59355
    [junit] 2011-03-31 12:33:18,703 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.512 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 59 minutes 47 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 112047, r=ReplicaInPipeline, blk_5010047870379614353_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 0   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_5010047870379614353   bytesAcked=0   bytesOnDisk=0

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 112047, r=ReplicaInPipeline, blk_5010047870379614353_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_5010047870379614353
  bytesAcked=0
  bytesOnDisk=0
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1375)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tgw(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)
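
The exception records the datanode refusing to promote a TEMPORARY replica to RBW (replica being written) because it had received fewer bytes (numBytes = 65536) than the length the client already considers visible (112047). A hedged, simplified sketch of that guard follows; the names mirror the exception text rather than the actual FSDataset.convertTemporaryToRbw source, and plain IOException stands in for ReplicaNotFoundException, whose constructors are not shown here:

    // Hedged sketch of the promotion guard implied by the message above.
    import java.io.IOException;

    class ConvertTemporaryToRbwSketch {
        static void convertTemporaryToRbw(long numBytes, long visible)
                throws IOException {
            if (numBytes < visible) {
                // Mirrors the "65536 = numBytes < visible = 112047" text.
                throw new IOException(numBytes + " = numBytes < visible = "
                        + visible
                        + ": replica is shorter than its visible length");
            }
            // ... promotion to RBW would proceed here ...
        }
    }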




Hadoop-Hdfs-trunk - Build # 622 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/622/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 698802 lines...]
    [junit] 2011-03-30 12:22:26,214 INFO  datanode.DataNode (BlockReceiver.java:run(926)) - PacketResponder blk_-4878957023449505870_1001 0 : Thread is interrupted.
    [junit] 2011-03-30 12:22:26,214 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:51648, storageID=DS-1891937375-127.0.1.1-51648-1301487735313, infoPort=55741, ipcPort=57432)
    [junit] 2011-03-30 12:22:26,214 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 2
    [junit] 2011-03-30 12:22:26,214 INFO  datanode.DataNode (BlockReceiver.java:run(1010)) - PacketResponder 0 for block blk_-4878957023449505870_1001 terminating
    [junit] 2011-03-30 12:22:26,214 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:51648, storageID=DS-1891937375-127.0.1.1-51648-1301487735313, infoPort=55741, ipcPort=57432)
    [junit] 2011-03-30 12:22:26,215 ERROR datanode.DataNode (DataXceiver.java:run(132)) - DatanodeRegistration(127.0.0.1:51648, storageID=DS-1891937375-127.0.1.1-51648-1301487735313, infoPort=55741, ipcPort=57432):DataXceiver
    [junit] java.lang.RuntimeException: java.lang.InterruptedException: sleep interrupted
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:82)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:346)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:463)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:651)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:360)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:332)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-30 12:22:26,217 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-30 12:22:26,317 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-30 12:22:26,317 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:51648, storageID=DS-1891937375-127.0.1.1-51648-1301487735313, infoPort=55741, ipcPort=57432):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-30 12:22:26,317 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 57432
    [junit] 2011-03-30 12:22:26,318 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-30 12:22:26,318 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-30 12:22:26,318 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-30 12:22:26,319 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-30 12:22:26,420 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-30 12:22:26,420 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-30 12:22:26,420 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 7 2 
    [junit] 2011-03-30 12:22:26,422 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 41503
    [junit] 2011-03-30 12:22:26,423 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 41503: exiting
    [junit] 2011-03-30 12:22:26,423 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 41503
    [junit] 2011-03-30 12:22:26,423 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.721 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 48 minutes 57 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
8 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 130213, r=ReplicaInPipeline, blk_-2736525394384087704_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 0   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-2736525394384087704   bytesAcked=0   bytesOnDisk=0

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 130213, r=ReplicaInPipeline, blk_-2736525394384087704_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-2736525394384087704
  bytesAcked=0
  bytesOnDisk=0
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1375)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tgv(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)


FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:257)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:119)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testURIPaths

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ltte1n131j(TestDFSShell.java:516)
	at org.apache.hadoop.hdfs.TestDFSShell.testURIPaths(TestDFSShell.java:449)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22e88ot1396(TestDFSShell.java:889)
	at org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:851)
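
Both ComparisonFailures come from a group-ownership check: junit.framework.ComparisonFailure brackets the differing portion of the two strings, and the leading "null" is the absent message argument. "supergroup" is HDFS's default group, so the group the test set ("reptiles") was evidently not in effect on the path it inspected. A hedged sketch of the check's shape (FileStatus.getGroup() is the real HDFS API; the helper itself is a guess for illustration):

    // Hedged sketch of a confirmOwner-style group check.
    import java.io.IOException;
    import junit.framework.Assert;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    class ConfirmGroupSketch {
        static void confirmGroup(FileSystem fs, Path p, String expectedGroup)
                throws IOException {
            FileStatus status = fs.getFileStatus(p);
            // With a null message this reports exactly:
            //   null expected:<[reptiles]> but was:<[supergroup]>
            Assert.assertEquals(null, expectedGroup, status.getGroup());
        }
    }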


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testDFSShell

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2prqrtk13a5(TestDFSShell.java:920)
	at org.apache.hadoop.hdfs.TestDFSShell.testDFSShell(TestDFSShell.java:916)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testRemoteException

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ayein613dt(TestDFSShell.java:1143)
	at org.apache.hadoop.hdfs.TestDFSShell.testRemoteException(TestDFSShell.java:1136)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testGet

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2tpje3v13es(TestDFSShell.java:1182)
	at org.apache.hadoop.hdfs.TestDFSShell.testGet(TestDFSShell.java:1179)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testLsr

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22emby813g1(TestDFSShell.java:1240)
	at org.apache.hadoop.hdfs.TestDFSShell.testLsr(TestDFSShell.java:1238)
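
The last four TestDFSShell failures look like the same secondary casualty: "The directory is already locked" means the in_use.lock file on .../name1 was still held, and a plausible reading is that an earlier test in the same JVM never shut its MiniDFSCluster down, so every later attempt to format that directory failed. The usual defense is to release the cluster in a finally block; a hedged sketch using the MiniDFSCluster.Builder API visible in the stack traces (configuration details are illustrative):

    // Hedged sketch: guarantee the storage locks are released even when
    // the test body throws.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    class ClusterTeardownSketch {
        static void runWithCluster() throws Exception {
            Configuration conf = new Configuration();
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
            try {
                // ... run shell commands against cluster.getFileSystem() ...
            } finally {
                cluster.shutdown();   // releases name/data directory locks
            }
        }
    }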




Hadoop-Hdfs-trunk - Build # 621 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/621/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 699203 lines...]
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-29 12:32:37,695 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-29 12:32:37,696 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-29 12:32:37,696 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:53973, storageID=DS-777661655-127.0.1.1-53973-1301401947074, infoPort=47607, ipcPort=60453):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-03-29 12:32:37,696 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 60453
    [junit] 2011-03-29 12:32:37,696 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-29 12:32:37,698 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-29 12:32:37,698 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-29 12:32:37,698 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-29 12:32:37,699 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-29 12:32:37,800 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 41601
    [junit] 2011-03-29 12:32:37,800 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 41601: exiting
    [junit] 2011-03-29 12:32:37,801 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 41601
    [junit] 2011-03-29 12:32:37,801 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-03-29 12:32:37,801 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:47815, storageID=DS-976453409-127.0.1.1-47815-1301401946890, infoPort=49694, ipcPort=41601):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-29 12:32:37,801 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-29 12:32:37,902 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-29 12:32:37,902 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:47815, storageID=DS-976453409-127.0.1.1-47815-1301401946890, infoPort=49694, ipcPort=41601):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-29 12:32:37,902 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 41601
    [junit] 2011-03-29 12:32:37,903 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-29 12:32:37,903 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-29 12:32:37,903 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-29 12:32:37,903 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-29 12:32:38,005 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-29 12:32:38,005 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 9 1 
    [junit] 2011-03-29 12:32:38,005 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-29 12:32:38,007 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 37641
    [junit] 2011-03-29 12:32:38,007 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 37641: exiting
    [junit] 2011-03-29 12:32:38,008 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 37641
    [junit] 2011-03-29 12:32:38,008 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.599 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 59 minutes 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
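
One note on the console excerpt above: the DataXceiveServer warning carrying java.nio.channels.AsynchronousCloseException is the normal signature of tearing down an NIO accept loop during DataNode shutdown, not a failure in itself. Closing a ServerSocketChannel from another thread makes a blocked accept() throw exactly that exception, as this small self-contained demo shows (the class name is hypothetical, not part of Hadoop):

    import java.net.InetSocketAddress;
    import java.nio.channels.ServerSocketChannel;

    public class AsyncCloseDemo {
        public static void main(String[] args) throws Exception {
            final ServerSocketChannel channel = ServerSocketChannel.open();
            channel.socket().bind(new InetSocketAddress(0));
            Thread closer = new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(200);
                        channel.close(); // unblocks the accept() below
                    } catch (Exception ignored) {
                    }
                }
            });
            closer.start();
            // Throws java.nio.channels.AsynchronousCloseException once the
            // closer thread closes the channel, mirroring the DataXceiverServer
            // log line during DataNode shutdown.
            channel.accept();
        }
    }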



###################################################################################
############################## FAILED TESTS (if any) ##############################
8 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 102717, r=ReplicaInPipeline, blk_-3557091731890250719_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 0   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-3557091731890250719   bytesAcked=0   bytesOnDisk=0

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 102717, r=ReplicaInPipeline, blk_-3557091731890250719_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-3557091731890250719
  bytesAcked=0
  bytesOnDisk=0
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1376)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tgv(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)
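
A note on the ReplicaNotFoundException above: it is raised while converting a TEMPORARY replica to RBW (replica being written) because the datanode has received fewer bytes (numBytes = 65536) than the client already considers visible (102717), so the conversion raced the in-flight transfer. The same check fails again in builds 620, 619 and 618 below with different block IDs and visible lengths, which points to a timing-dependent test rather than a deterministic regression. A minimal, self-contained model of the guard, with hypothetical class and method names (this is not the FSDataset source):

    public class RbwConversionCheck {
        // FSDataset refuses the TEMPORARY -> RBW conversion while the replica
        // is still shorter than what the client can already see.
        static void convertTemporaryToRbw(long numBytes, long visibleLength) {
            if (numBytes < visibleLength) {
                throw new IllegalStateException(numBytes
                    + " = numBytes < visible = " + visibleLength);
            }
        }

        public static void main(String[] args) {
            // Reproduces the shape of the failure reported above.
            convertTemporaryToRbw(65536L, 102717L);
        }
    }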


FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:257)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:119)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testURIPaths

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ltte1n131j(TestDFSShell.java:516)
	at org.apache.hadoop.hdfs.TestDFSShell.testURIPaths(TestDFSShell.java:449)
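
This comparison failure, and the identical one in testFilePermissions just below, comes from confirmOwner comparing a path's group against an expected value: the test expects the group it presumably set ("reptiles") but finds the HDFS default ("supergroup"), so the group change apparently never took effect on this build host. A hedged sketch of that kind of check, assuming hadoop-common on the classpath (illustrative only, not the TestDFSShell source):

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GroupCheck {
        // Compares a path's group to the expected one, producing the same
        // expected:<[...]> but was:<[...]> shape that JUnit prints above.
        static void confirmGroup(FileSystem fs, Path p, String expected)
                throws java.io.IOException {
            FileStatus status = fs.getFileStatus(p);
            if (!expected.equals(status.getGroup())) {
                throw new AssertionError("expected:<[" + expected
                    + "]> but was:<[" + status.getGroup() + "]>");
            }
        }
    }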


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22e88ot1396(TestDFSShell.java:889)
	at org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:851)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testDFSShell

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2prqrtk13a5(TestDFSShell.java:920)
	at org.apache.hadoop.hdfs.TestDFSShell.testDFSShell(TestDFSShell.java:916)
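
The remaining four failures (testDFSShell here, then testRemoteException, testGet and testLsr below) all die inside MiniDFSCluster.Builder.build() because build/test/data/dfs/name1 is still holding its storage lock, which usually means an earlier test in the same JVM left its cluster running, for example when a failed assertion skipped the shutdown path. A minimal lifecycle sketch under that assumption, using the MiniDFSCluster builder API visible in the stack traces and assuming the HDFS test artifact on the classpath:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class ClusterLifecycle {
        public static void main(String[] args) throws Exception {
            MiniDFSCluster cluster =
                new MiniDFSCluster.Builder(new Configuration()).numDataNodes(1).build();
            try {
                // The test body would run against cluster.getFileSystem() here.
            } finally {
                // Always release the lock on the name directories so the next
                // test in the same JVM can format and lock them again.
                cluster.shutdown();
            }
        }
    }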


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testRemoteException

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ayein613dt(TestDFSShell.java:1143)
	at org.apache.hadoop.hdfs.TestDFSShell.testRemoteException(TestDFSShell.java:1136)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testGet

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2tpje3v13es(TestDFSShell.java:1182)
	at org.apache.hadoop.hdfs.TestDFSShell.testGet(TestDFSShell.java:1179)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testLsr

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22emby813g1(TestDFSShell.java:1240)
	at org.apache.hadoop.hdfs.TestDFSShell.testLsr(TestDFSShell.java:1238)




Hadoop-Hdfs-trunk - Build # 620 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/620/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 690185 lines...]
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-28 12:32:38,702 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-28 12:32:38,702 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-28 12:32:38,703 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:59927, storageID=DS-1180270094-127.0.1.1-59927-1301315548093, infoPort=44831, ipcPort=43258):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-03-28 12:32:38,703 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 43258
    [junit] 2011-03-28 12:32:38,703 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-28 12:32:38,703 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-28 12:32:38,703 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-28 12:32:38,704 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-28 12:32:38,704 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-28 12:32:38,805 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 47290
    [junit] 2011-03-28 12:32:38,806 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 47290: exiting
    [junit] 2011-03-28 12:32:38,806 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 47290
    [junit] 2011-03-28 12:32:38,806 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-03-28 12:32:38,806 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:53628, storageID=DS-1160082990-127.0.1.1-53628-1301315547913, infoPort=32856, ipcPort=47290):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-28 12:32:38,806 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-28 12:32:38,907 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-28 12:32:38,908 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:53628, storageID=DS-1160082990-127.0.1.1-53628-1301315547913, infoPort=32856, ipcPort=47290):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-28 12:32:38,908 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 47290
    [junit] 2011-03-28 12:32:38,908 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-28 12:32:38,908 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-28 12:32:38,908 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-28 12:32:38,909 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-28 12:32:39,012 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-28 12:32:39,012 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 3 
    [junit] 2011-03-28 12:32:39,012 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-28 12:32:39,014 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 43074
    [junit] 2011-03-28 12:32:39,015 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 43074: exiting
    [junit] 2011-03-28 12:32:39,015 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 43074
    [junit] 2011-03-28 12:32:39,015 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.6 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 59 minutes 5 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
8 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 79419, r=ReplicaInPipeline, blk_-6718005118883221936_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 0   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-6718005118883221936   bytesAcked=0   bytesOnDisk=0

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 79419, r=ReplicaInPipeline, blk_-6718005118883221936_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-6718005118883221936
  bytesAcked=0
  bytesOnDisk=0
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1387)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tgc(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)


FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:257)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:119)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testURIPaths

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ltte1n1310(TestDFSShell.java:516)
	at org.apache.hadoop.hdfs.TestDFSShell.testURIPaths(TestDFSShell.java:449)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22e88ot138n(TestDFSShell.java:889)
	at org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:851)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testDFSShell

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2prqrtk139m(TestDFSShell.java:920)
	at org.apache.hadoop.hdfs.TestDFSShell.testDFSShell(TestDFSShell.java:916)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testRemoteException

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ayein613da(TestDFSShell.java:1143)
	at org.apache.hadoop.hdfs.TestDFSShell.testRemoteException(TestDFSShell.java:1136)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testGet

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2tpje3v13e9(TestDFSShell.java:1182)
	at org.apache.hadoop.hdfs.TestDFSShell.testGet(TestDFSShell.java:1179)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testLsr

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22emby813fi(TestDFSShell.java:1240)
	at org.apache.hadoop.hdfs.TestDFSShell.testLsr(TestDFSShell.java:1238)




Hadoop-Hdfs-trunk - Build # 619 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/619/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 713643 lines...]
    [junit] 
    [junit] 2011-03-27 12:31:39,099 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:43089, storageID=DS-402378168-127.0.1.1-43089-1301229088198, infoPort=54919, ipcPort=41226)
    [junit] 2011-03-27 12:31:39,099 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:43089, storageID=DS-402378168-127.0.1.1-43089-1301229088198, infoPort=54919, ipcPort=41226)
    [junit] 2011-03-27 12:31:39,098 INFO  datanode.DataNode (BlockReceiver.java:run(926)) - PacketResponder blk_-573293244434035474_1001 0 : Thread is interrupted.
    [junit] 2011-03-27 12:31:39,099 INFO  datanode.DataNode (BlockReceiver.java:run(1010)) - PacketResponder 0 for block blk_-573293244434035474_1001 terminating
    [junit] 2011-03-27 12:31:39,099 ERROR datanode.DataNode (DataXceiver.java:run(132)) - DatanodeRegistration(127.0.0.1:43089, storageID=DS-402378168-127.0.1.1-43089-1301229088198, infoPort=54919, ipcPort=41226):DataXceiver
    [junit] java.lang.RuntimeException: java.lang.InterruptedException: sleep interrupted
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:82)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:346)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:463)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:651)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:393)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:332)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-27 12:31:39,101 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-27 12:31:39,201 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-27 12:31:39,201 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:43089, storageID=DS-402378168-127.0.1.1-43089-1301229088198, infoPort=54919, ipcPort=41226):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-27 12:31:39,202 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 41226
    [junit] 2011-03-27 12:31:39,202 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-27 12:31:39,202 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-27 12:31:39,202 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-27 12:31:39,203 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-27 12:31:39,304 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-27 12:31:39,304 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 10 3 
    [junit] 2011-03-27 12:31:39,305 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-27 12:31:39,306 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 42780
    [junit] 2011-03-27 12:31:39,307 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 42780: exiting
    [junit] 2011-03-27 12:31:39,307 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 42780
    [junit] 2011-03-27 12:31:39,308 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.42 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 58 minutes 12 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
8 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 84255, r=ReplicaInPipeline, blk_-8473050276165535237_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 65536   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-8473050276165535237   bytesAcked=0   bytesOnDisk=65536

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 84255, r=ReplicaInPipeline, blk_-8473050276165535237_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 65536
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-8473050276165535237
  bytesAcked=0
  bytesOnDisk=65536
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1387)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tgc(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)


FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:257)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:119)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testURIPaths

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ltte1n1310(TestDFSShell.java:516)
	at org.apache.hadoop.hdfs.TestDFSShell.testURIPaths(TestDFSShell.java:449)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22e88ot138n(TestDFSShell.java:889)
	at org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:851)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testDFSShell

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2prqrtk139m(TestDFSShell.java:920)
	at org.apache.hadoop.hdfs.TestDFSShell.testDFSShell(TestDFSShell.java:916)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testRemoteException

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ayein613da(TestDFSShell.java:1143)
	at org.apache.hadoop.hdfs.TestDFSShell.testRemoteException(TestDFSShell.java:1136)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testGet

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2tpje3v13e9(TestDFSShell.java:1182)
	at org.apache.hadoop.hdfs.TestDFSShell.testGet(TestDFSShell.java:1179)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testLsr

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22emby813fi(TestDFSShell.java:1240)
	at org.apache.hadoop.hdfs.TestDFSShell.testLsr(TestDFSShell.java:1238)




Hadoop-Hdfs-trunk - Build # 618 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/618/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 711591 lines...]
    [junit] 2011-03-26 12:22:04,765 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 2
    [junit] 2011-03-26 12:22:04,766 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:54302, storageID=DS-279690435-127.0.1.1-54302-1301142113914, infoPort=34030, ipcPort=42716)
    [junit] 2011-03-26 12:22:04,766 INFO  datanode.DataNode (BlockReceiver.java:run(926)) - PacketResponder blk_-8470746226147512846_1001 0 : Thread is interrupted.
    [junit] 2011-03-26 12:22:04,766 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:54302, storageID=DS-279690435-127.0.1.1-54302-1301142113914, infoPort=34030, ipcPort=42716)
    [junit] 2011-03-26 12:22:04,766 INFO  datanode.DataNode (BlockReceiver.java:run(1010)) - PacketResponder 0 for block blk_-8470746226147512846_1001 terminating
    [junit] 2011-03-26 12:22:04,767 ERROR datanode.DataNode (DataXceiver.java:run(132)) - DatanodeRegistration(127.0.0.1:54302, storageID=DS-279690435-127.0.1.1-54302-1301142113914, infoPort=34030, ipcPort=42716):DataXceiver
    [junit] java.lang.RuntimeException: java.lang.InterruptedException: sleep interrupted
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:82)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:346)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:463)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:651)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:393)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:332)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-26 12:22:04,768 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-26 12:22:04,868 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-26 12:22:04,868 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:54302, storageID=DS-279690435-127.0.1.1-54302-1301142113914, infoPort=34030, ipcPort=42716):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-26 12:22:04,869 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 42716
    [junit] 2011-03-26 12:22:04,869 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-26 12:22:04,869 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-26 12:22:04,869 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-26 12:22:04,870 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-26 12:22:04,882 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-26 12:22:04,882 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-26 12:22:04,882 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 3 3 
    [junit] 2011-03-26 12:22:04,884 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 51057
    [junit] 2011-03-26 12:22:04,884 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 51057: exiting
    [junit] 2011-03-26 12:22:04,884 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 51057
    [junit] 2011-03-26 12:22:04,884 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.623 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 48 minutes 37 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
8 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 125757, r=ReplicaInPipeline, blk_-6894539633202504254_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 65536   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-6894539633202504254   bytesAcked=0   bytesOnDisk=65536

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 125757, r=ReplicaInPipeline, blk_-6894539633202504254_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 65536
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-6894539633202504254
  bytesAcked=0
  bytesOnDisk=65536
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1387)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tgc(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)


FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:257)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:119)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testURIPaths

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ltte1n1310(TestDFSShell.java:516)
	at org.apache.hadoop.hdfs.TestDFSShell.testURIPaths(TestDFSShell.java:449)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22e88ot138n(TestDFSShell.java:889)
	at org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:851)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testDFSShell

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2prqrtk139m(TestDFSShell.java:920)
	at org.apache.hadoop.hdfs.TestDFSShell.testDFSShell(TestDFSShell.java:916)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testRemoteException

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ayein613da(TestDFSShell.java:1143)
	at org.apache.hadoop.hdfs.TestDFSShell.testRemoteException(TestDFSShell.java:1136)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testGet

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2tpje3v13e9(TestDFSShell.java:1182)
	at org.apache.hadoop.hdfs.TestDFSShell.testGet(TestDFSShell.java:1179)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testLsr

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22emby813fi(TestDFSShell.java:1240)
	at org.apache.hadoop.hdfs.TestDFSShell.testLsr(TestDFSShell.java:1238)




Hadoop-Hdfs-trunk - Build # 617 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/617/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 706775 lines...]
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-25 12:23:16,150 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-25 12:23:16,151 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-25 12:23:16,151 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:36251, storageID=DS-468696142-127.0.1.1-36251-1301055785538, infoPort=55782, ipcPort=36232):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-03-25 12:23:16,151 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 36232
    [junit] 2011-03-25 12:23:16,151 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-25 12:23:16,151 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-25 12:23:16,152 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-25 12:23:16,152 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-25 12:23:16,152 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-25 12:23:16,254 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 52971
    [junit] 2011-03-25 12:23:16,254 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 52971: exiting
    [junit] 2011-03-25 12:23:16,254 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 52971
    [junit] 2011-03-25 12:23:16,254 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-03-25 12:23:16,255 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-25 12:23:16,255 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:41445, storageID=DS-1304287072-127.0.1.1-41445-1301055785353, infoPort=53886, ipcPort=52971):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-25 12:23:16,257 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-25 12:23:16,357 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-25 12:23:16,358 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:41445, storageID=DS-1304287072-127.0.1.1-41445-1301055785353, infoPort=53886, ipcPort=52971):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-25 12:23:16,358 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 52971
    [junit] 2011-03-25 12:23:16,358 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-25 12:23:16,358 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-25 12:23:16,359 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-25 12:23:16,359 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-25 12:23:16,460 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-25 12:23:16,460 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-25 12:23:16,461 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-03-25 12:23:16,463 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 36444
    [junit] 2011-03-25 12:23:16,463 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 36444: exiting
    [junit] 2011-03-25 12:23:16,463 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 36444
    [junit] 2011-03-25 12:23:16,463 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.347 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 49 minutes 52 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
7 tests failed.
FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:257)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:119)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)
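
The TestHDFSCLI failure is a summary assertion, not a single broken command: the CLI harness runs every command from its test config, records each verdict, and only fails once in tearDown(). A sketch of that assumed shape (names here are illustrative, not the verbatim CLITestHelper code):

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.Assert;

    class CliHarnessSketch {
        private final List<Boolean> verdicts = new ArrayList<Boolean>();

        void record(boolean passed) {
            verdicts.add(passed);
        }

        // Called from tearDown(): per-command detail is printed elsewhere;
        // the whole run fails here if any single command mismatched.
        void displayResults() {
            boolean allPassed = true;
            for (boolean passed : verdicts) {
                allPassed &= passed;
            }
            Assert.assertTrue("One of the tests failed. See the Detailed "
                    + "results to identify the command that failed",
                    allPassed);
        }
    }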


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testURIPaths

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ltte1n130s(TestDFSShell.java:516)
	at org.apache.hadoop.hdfs.TestDFSShell.testURIPaths(TestDFSShell.java:449)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22e88ot138f(TestDFSShell.java:889)
	at org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:851)
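
The two reptiles/supergroup failures share one cause: confirmOwner() reads back the group of a path after the shell's chown/chgrp commands, and getting the default "supergroup" means the group change never took effect on the build slave. A hedged sketch of that kind of check (the helper's signature is an assumption; only the failing assertEquals mirrors the traces):

    import junit.framework.Assert;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    class OwnerCheckSketch {
        // Assert the owner/group recorded by the NameNode; pass null to
        // skip either check.
        static void confirmOwner(String owner, String group,
                                 FileSystem fs, Path path) throws Exception {
            FileStatus status = fs.getFileStatus(path);
            if (owner != null) {
                Assert.assertEquals(owner, status.getOwner());
            }
            if (group != null) {
                // Produces expected:<[reptiles]> but was:<[supergroup]> when
                // the path still carries the filesystem default group.
                Assert.assertEquals(group, status.getGroup());
            }
        }
    }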


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testDFSShell

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2prqrtk139e(TestDFSShell.java:920)
	at org.apache.hadoop.hdfs.TestDFSShell.testDFSShell(TestDFSShell.java:916)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testRemoteException

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ayein613d2(TestDFSShell.java:1143)
	at org.apache.hadoop.hdfs.TestDFSShell.testRemoteException(TestDFSShell.java:1136)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testGet

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2tpje3v13e1(TestDFSShell.java:1182)
	at org.apache.hadoop.hdfs.TestDFSShell.testGet(TestDFSShell.java:1179)


FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testLsr

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22emby813fa(TestDFSShell.java:1240)
	at org.apache.hadoop.hdfs.TestDFSShell.testLsr(TestDFSShell.java:1238)
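
All five lock failures in this report hinge on the MiniDFSCluster lifecycle. A minimal sketch, not the actual TestDFSShell code, of how a test brackets the cluster so the name-directory lock is always released, even when assertions throw:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterLifecycleSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                    .numDataNodes(2)
                    .build();
            try {
                cluster.waitActive();
                FileSystem fs = cluster.getFileSystem();
                // ... run FsShell commands against fs ...
            } finally {
                // Skipping this on a failure path leaves in_use.lock held
                // and makes every later cluster in the JVM fail to format.
                cluster.shutdown();
            }
        }
    }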




Hadoop-Hdfs-trunk - Build # 616 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/616/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 708649 lines...]
    [junit] 2011-03-24 12:22:35,345 INFO  datanode.DataNode (BlockReceiver.java:run(914)) - PacketResponder blk_6538719823285349735_1001 0 : Thread is interrupted.
    [junit] 2011-03-24 12:22:35,344 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 3
    [junit] 2011-03-24 12:22:35,344 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 34307
    [junit] 2011-03-24 12:22:35,345 INFO  datanode.DataNode (BlockReceiver.java:run(999)) - PacketResponder 0 for block blk_6538719823285349735_1001 terminating
    [junit] 2011-03-24 12:22:35,345 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(220)) - FI: blockFileClose, datanode=DatanodeRegistration(127.0.0.1:42931, storageID=DS-1663091717-127.0.1.1-42931-1300969344471, infoPort=45787, ipcPort=34307)
    [junit] 2011-03-24 12:22:35,346 ERROR datanode.DataNode (DataXceiver.java:run(132)) - DatanodeRegistration(127.0.0.1:42931, storageID=DS-1663091717-127.0.1.1-42931-1300969344471, infoPort=45787, ipcPort=34307):DataXceiver
    [junit] java.lang.RuntimeException: java.lang.InterruptedException: sleep interrupted
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:82)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:346)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:451)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:639)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:332)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-24 12:22:35,347 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-24 12:22:35,448 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-24 12:22:35,448 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:42931, storageID=DS-1663091717-127.0.1.1-42931-1300969344471, infoPort=45787, ipcPort=34307):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-24 12:22:35,448 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 34307
    [junit] 2011-03-24 12:22:35,448 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-24 12:22:35,449 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-24 12:22:35,449 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-24 12:22:35,449 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-24 12:22:35,551 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-24 12:22:35,551 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 3 
    [junit] 2011-03-24 12:22:35,551 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-24 12:22:35,553 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 44144
    [junit] 2011-03-24 12:22:35,553 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 44144: exiting
    [junit] 2011-03-24 12:22:35,554 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 44144
    [junit] 2011-03-24 12:22:35,554 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.558 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 48 minutes 29 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
7 tests failed.
REGRESSION:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:257)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:119)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testURIPaths

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ltte1n130s(TestDFSShell.java:516)
	at org.apache.hadoop.hdfs.TestDFSShell.testURIPaths(TestDFSShell.java:449)


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions

Error Message:
null expected:<[reptiles]> but was:<[supergroup]>

Stack Trace:
junit.framework.ComparisonFailure: null expected:<[reptiles]> but was:<[supergroup]>
	at org.apache.hadoop.hdfs.TestDFSShell.confirmOwner(TestDFSShell.java:846)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22e88ot138f(TestDFSShell.java:889)
	at org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:851)


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testDFSShell

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2prqrtk139e(TestDFSShell.java:920)
	at org.apache.hadoop.hdfs.TestDFSShell.testDFSShell(TestDFSShell.java:916)


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testRemoteException

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2ayein613d2(TestDFSShell.java:1143)
	at org.apache.hadoop.hdfs.TestDFSShell.testRemoteException(TestDFSShell.java:1136)


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testGet

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_2tpje3v13e1(TestDFSShell.java:1182)
	at org.apache.hadoop.hdfs.TestDFSShell.testGet(TestDFSShell.java:1179)


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testLsr

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1420)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:210)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDFSShell.__CLR3_0_22emby813fa(TestDFSShell.java:1240)
	at org.apache.hadoop.hdfs.TestDFSShell.testLsr(TestDFSShell.java:1238)




Hadoop-Hdfs-trunk - Build # 615 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/615/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6 lines...]
java.lang.NullPointerException
	at hudson.tasks.JavadocArchiver.perform(JavadocArchiver.java:94)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
	at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:644)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:623)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:601)
	at hudson.model.Build$RunnerImpl.post2(Build.java:159)
	at hudson.model.AbstractBuild$AbstractRunner.post(AbstractBuild.java:570)
	at hudson.model.Run.run(Run.java:1386)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:145)
Archiving artifacts
Recording test results
ERROR: Publisher hudson.tasks.junit.JUnitResultArchiver aborted due to exception
java.lang.NullPointerException
	at hudson.tasks.junit.JUnitParser.parse(JUnitParser.java:83)
	at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:123)
	at hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:135)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
	at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:644)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:623)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:601)
	at hudson.model.Build$RunnerImpl.post2(Build.java:159)
	at hudson.model.AbstractBuild$AbstractRunner.post(AbstractBuild.java:570)
	at hudson.model.Run.run(Run.java:1386)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:145)
Recording fingerprints
ERROR: Unable to record fingerprints because there's no workspace
ERROR: Publisher hudson.plugins.violations.ViolationsPublisher aborted due to exception
java.lang.NullPointerException
	at hudson.plugins.violations.ViolationsPublisher.perform(ViolationsPublisher.java:74)
	at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
	at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:644)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:623)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:601)
	at hudson.model.Build$RunnerImpl.post2(Build.java:159)
	at hudson.model.AbstractBuild$AbstractRunner.post(AbstractBuild.java:570)
	at hudson.model.Run.run(Run.java:1386)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:145)
ERROR: Publisher hudson.plugins.clover.CloverPublisher aborted due to exception
java.lang.NullPointerException
	at hudson.plugins.clover.CloverPublisher.perform(CloverPublisher.java:137)
	at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
	at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:644)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:623)
	at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:601)
	at hudson.model.Build$RunnerImpl.post2(Build.java:159)
	at hudson.model.AbstractBuild$AbstractRunner.post(AbstractBuild.java:570)
	at hudson.model.Run.run(Run.java:1386)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:145)
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 614 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/614/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 729780 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-22 12:24:42,983 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-22 12:24:42,983 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-22 12:24:42,984 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:57260, storageID=DS-91605065-127.0.1.1-57260-1300796672283, infoPort=35711, ipcPort=34865):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-03-22 12:24:42,984 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 34865
    [junit] 2011-03-22 12:24:42,984 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-22 12:24:42,984 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-22 12:24:42,985 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-22 12:24:42,985 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-22 12:24:42,985 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-22 12:24:42,990 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 38522
    [junit] 2011-03-22 12:24:42,990 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-03-22 12:24:42,990 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-22 12:24:42,991 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 38522
    [junit] 2011-03-22 12:24:42,991 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:49868, storageID=DS-1067466969-127.0.1.1-49868-1300796672121, infoPort=48806, ipcPort=38522):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-22 12:24:42,993 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-22 12:24:42,995 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 38522: exiting
    [junit] 2011-03-22 12:24:43,093 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-22 12:24:43,093 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:49868, storageID=DS-1067466969-127.0.1.1-49868-1300796672121, infoPort=48806, ipcPort=38522):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-22 12:24:43,094 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 38522
    [junit] 2011-03-22 12:24:43,094 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-22 12:24:43,094 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-22 12:24:43,094 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-22 12:24:43,095 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-22 12:24:43,196 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-22 12:24:43,196 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 9 4 
    [junit] 2011-03-22 12:24:43,197 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-22 12:24:43,198 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 34268
    [junit] 2011-03-22 12:24:43,198 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 34268: exiting
    [junit] 2011-03-22 12:24:43,199 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-03-22 12:24:43,199 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 34268
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.379 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 51 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 95378, r=ReplicaInPipeline, blk_1818901318025178337_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 0   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_1818901318025178337   bytesAcked=0   bytesOnDisk=0

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 95378, r=ReplicaInPipeline, blk_1818901318025178337_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_1818901318025178337
  bytesAcked=0
  bytesOnDisk=0
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1387)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tg4(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)
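
The TestTransferRbw failure is a state check in the DataNode: a replica received as TEMPORARY may only be converted to RBW (replica being written) once it already holds at least as many bytes as the client considers visible; here only 65536 bytes had arrived against a visible length of 95378, which points to a race between the transfer and the conversion. A paraphrase of that guard (the real check lives in FSDataset.convertTemporaryToRbw; this is an illustration, not the source):

    import java.io.IOException;

    class RbwGuardSketch {
        // Reject the TEMPORARY-to-RBW conversion while the on-hand byte
        // count still trails the length the client believes is visible.
        static void checkConvertToRbw(long numBytes, long visibleLength)
                throws IOException {
            if (numBytes < visibleLength) {
                throw new IOException(numBytes + " = numBytes < visible = "
                        + visibleLength);
            }
        }
    }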




Hadoop-Hdfs-trunk - Build # 613 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/613/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 706069 lines...]
    [junit] 
    [junit] 2011-03-21 12:22:28,346 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 38735
    [junit] 2011-03-21 12:22:28,348 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-21 12:22:28,348 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-21 12:22:28,348 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:40498, storageID=DS-388299994-127.0.1.1-40498-1300710137771, infoPort=47549, ipcPort=38735):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-03-21 12:22:28,349 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 38735
    [junit] 2011-03-21 12:22:28,349 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-21 12:22:28,349 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-21 12:22:28,349 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-21 12:22:28,350 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-21 12:22:28,350 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-21 12:22:28,451 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 51456
    [junit] 2011-03-21 12:22:28,452 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 51456: exiting
    [junit] 2011-03-21 12:22:28,452 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 51456
    [junit] 2011-03-21 12:22:28,452 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-21 12:22:28,452 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:43128, storageID=DS-2103166115-127.0.1.1-43128-1300710137598, infoPort=42993, ipcPort=51456):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-21 12:22:28,452 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-21 12:22:28,454 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-21 12:22:28,555 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-21 12:22:28,555 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:43128, storageID=DS-2103166115-127.0.1.1-43128-1300710137598, infoPort=42993, ipcPort=51456):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-21 12:22:28,555 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 51456
    [junit] 2011-03-21 12:22:28,556 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-21 12:22:28,556 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-21 12:22:28,556 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-21 12:22:28,556 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-21 12:22:28,658 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-21 12:22:28,658 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 3 
    [junit] 2011-03-21 12:22:28,658 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-21 12:22:28,660 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 55720
    [junit] 2011-03-21 12:22:28,660 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 55720: exiting
    [junit] 2011-03-21 12:22:28,661 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 55720
    [junit] 2011-03-21 12:22:28,661 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.639 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 49 minutes 27 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 89896, r=ReplicaInPipeline, blk_-8157901986060962899_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 0   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-8157901986060962899   bytesAcked=0   bytesOnDisk=0

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 89896, r=ReplicaInPipeline, blk_-8157901986060962899_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-8157901986060962899
  bytesAcked=0
  bytesOnDisk=0
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1387)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2021)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tfi(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)




Hadoop-Hdfs-trunk - Build # 612 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/612/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 695456 lines...]
    [junit] 2011-03-20 12:22:42,323 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-20 12:22:42,324 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-20 12:22:42,324 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:49595, storageID=DS-765468382-127.0.1.1-49595-1300623751741, infoPort=37061, ipcPort=59644):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-03-20 12:22:42,324 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 59644
    [junit] 2011-03-20 12:22:42,324 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-20 12:22:42,324 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-20 12:22:42,325 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-20 12:22:42,325 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-20 12:22:42,325 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-20 12:22:42,427 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 56909
    [junit] 2011-03-20 12:22:42,427 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 56909: exiting
    [junit] 2011-03-20 12:22:42,427 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 56909
    [junit] 2011-03-20 12:22:42,427 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-20 12:22:42,427 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-20 12:22:42,427 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:42236, storageID=DS-841031619-127.0.1.1-42236-1300623751567, infoPort=38387, ipcPort=56909):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-20 12:22:42,430 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-20 12:22:42,530 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-20 12:22:42,530 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:42236, storageID=DS-841031619-127.0.1.1-42236-1300623751567, infoPort=38387, ipcPort=56909):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-20 12:22:42,531 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 56909
    [junit] 2011-03-20 12:22:42,531 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-20 12:22:42,531 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-20 12:22:42,531 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-20 12:22:42,532 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-20 12:22:42,633 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-20 12:22:42,633 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-20 12:22:42,634 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 3 
    [junit] 2011-03-20 12:22:42,635 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 40189
    [junit] 2011-03-20 12:22:42,635 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 40189: exiting
    [junit] 2011-03-20 12:22:42,636 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 40189
    [junit] 2011-03-20 12:22:42,636 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.667 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 49 minutes 43 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: 
	at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.runTest29_30(TestFiDataTransferProtocol2.java:153)
	at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29(TestFiDataTransferProtocol2.java:251)




Re: Hadoop-Hdfs-trunk - Build # 611 - Still Failing

Posted by Eli Collins <el...@cloudera.com>.
The TestFiRename test failure is HDFS-1770, for which I've committed the fix.


On Sat, Mar 19, 2011 at 5:31 AM, Apache Hudson Server
<hu...@hudson.apache.org> wrote:
> See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/611/

Hadoop-Hdfs-trunk - Build # 611 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/611/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 751238 lines...]
    [junit] 2011-03-19 12:31:13,764 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-19 12:31:13,765 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-19 12:31:13,765 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:54442, storageID=DS-347111320-127.0.1.1-54442-1300537863165, infoPort=59494, ipcPort=38135):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-03-19 12:31:13,765 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 38135
    [junit] 2011-03-19 12:31:13,765 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-19 12:31:13,766 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-19 12:31:13,766 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-19 12:31:13,766 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-19 12:31:13,767 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-19 12:31:13,868 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 48381
    [junit] 2011-03-19 12:31:13,868 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 48381: exiting
    [junit] 2011-03-19 12:31:13,869 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 48381
    [junit] 2011-03-19 12:31:13,869 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-19 12:31:13,869 WARN  datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:33450, storageID=DS-19390335-127.0.1.1-33450-1300537863003, infoPort=43863, ipcPort=48381):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-19 12:31:13,869 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-19 12:31:13,871 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-19 12:31:13,972 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-19 12:31:13,972 INFO  datanode.DataNode (DataNode.java:run(1464)) - DatanodeRegistration(127.0.0.1:33450, storageID=DS-19390335-127.0.1.1-33450-1300537863003, infoPort=43863, ipcPort=48381):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-19 12:31:13,972 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 48381
    [junit] 2011-03-19 12:31:13,972 INFO  datanode.DataNode (DataNode.java:shutdown(788)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-19 12:31:13,972 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-19 12:31:13,973 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-19 12:31:13,973 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-19 12:31:14,074 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2856)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-19 12:31:14,075 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 9 2 
    [junit] 2011-03-19 12:31:14,075 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-19 12:31:14,076 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 53197
    [junit] 2011-03-19 12:31:14,077 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 53197: exiting
    [junit] 2011-03-19 12:31:14,077 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.524 sec
    [junit] 2011-03-19 12:31:14,095 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 53197

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 58 minutes 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.fs.TestFiRename.testFailureNonExistentDst

Error Message:
Internal error: default blockSize is not a multiple of default bytesPerChecksum 

Stack Trace:
java.io.IOException: Internal error: default blockSize is not a multiple of default bytesPerChecksum 
	at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:506)
	at org.apache.hadoop.fs.FileContext$2.next(FileContext.java:576)
	at org.apache.hadoop.fs.FileContext$2.next(FileContext.java:573)
	at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2215)
	at org.apache.hadoop.fs.FileContext.create(FileContext.java:573)
	at org.apache.hadoop.fs.TestFiRename.createFile(TestFiRename.java:141)
	at org.apache.hadoop.fs.TestFiRename.testFailureNonExistentDst(TestFiRename.java:152)


REGRESSION:  org.apache.hadoop.fs.TestFiRename.testFailuresExistingDst

Error Message:
Internal error: default blockSize is not a multiple of default bytesPerChecksum 

Stack Trace:
java.io.IOException: Internal error: default blockSize is not a multiple of default bytesPerChecksum 
	at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:506)
	at org.apache.hadoop.fs.FileContext$2.next(FileContext.java:576)
	at org.apache.hadoop.fs.FileContext$2.next(FileContext.java:573)
	at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2215)
	at org.apache.hadoop.fs.FileContext.create(FileContext.java:573)
	at org.apache.hadoop.fs.TestFiRename.createFile(TestFiRename.java:141)
	at org.apache.hadoop.fs.TestFiRename.testFailuresExistingDst(TestFiRename.java:168)


REGRESSION:  org.apache.hadoop.fs.TestFiRename.testDeletionOfDstFile

Error Message:
Internal error: default blockSize is not a multiple of default bytesPerChecksum 

Stack Trace:
java.io.IOException: Internal error: default blockSize is not a multiple of default bytesPerChecksum 
	at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:506)
	at org.apache.hadoop.fs.FileContext$2.next(FileContext.java:576)
	at org.apache.hadoop.fs.FileContext$2.next(FileContext.java:573)
	at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2215)
	at org.apache.hadoop.fs.FileContext.create(FileContext.java:573)
	at org.apache.hadoop.fs.TestFiRename.createFile(TestFiRename.java:141)
	at org.apache.hadoop.fs.TestFiRename.testDeletionOfDstFile(TestFiRename.java:189)




Hadoop-Hdfs-trunk - Build # 610 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/610/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 702511 lines...]
    [junit] 2011-03-18 12:32:25,462 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-18 12:32:25,462 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-18 12:32:25,564 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 33248
    [junit] 2011-03-18 12:32:25,564 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 33248: exiting
    [junit] 2011-03-18 12:32:25,565 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 33248
    [junit] 2011-03-18 12:32:25,565 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-18 12:32:25,565 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:40492, storageID=DS-1069008913-127.0.1.1-40492-1300451534737, infoPort=48021, ipcPort=33248):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-18 12:32:25,565 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-18 12:32:25,567 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-18 12:32:25,668 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-18 12:32:25,668 INFO  datanode.DataNode (DataNode.java:run(1462)) - DatanodeRegistration(127.0.0.1:40492, storageID=DS-1069008913-127.0.1.1-40492-1300451534737, infoPort=48021, ipcPort=33248):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-18 12:32:25,668 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 33248
    [junit] 2011-03-18 12:32:25,668 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-18 12:32:25,669 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-18 12:32:25,669 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-18 12:32:25,669 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-18 12:32:25,771 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-18 12:32:25,771 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 3 
    [junit] 2011-03-18 12:32:25,771 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-18 12:32:25,773 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 34987
    [junit] 2011-03-18 12:32:25,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 34987: exiting
    [junit] 2011-03-18 12:32:25,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 5 on 34987: exiting
    [junit] 2011-03-18 12:32:25,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 3 on 34987: exiting
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 1 on 34987: exiting
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 2 on 34987: exiting
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 7 on 34987: exiting
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 34987
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 8 on 34987: exiting
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 9 on 34987: exiting
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 6 on 34987: exiting
    [junit] 2011-03-18 12:32:25,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 4 on 34987: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.348 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 59 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw

Error Message:
65536 = numBytes < visible = 66996, r=ReplicaInPipeline, blk_-9037085292264431097_1001, TEMPORARY   getNumBytes()     = 65536   getBytesOnDisk()  = 0   getVisibleLength()= -1   getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized   getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-9037085292264431097   bytesAcked=0   bytesOnDisk=0

Stack Trace:
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 65536 = numBytes < visible = 66996, r=ReplicaInPipeline, blk_-9037085292264431097_1001, TEMPORARY
  getNumBytes()     = 65536
  getBytesOnDisk()  = 0
  getVisibleLength()= -1
  getVolume()       = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/current/finalized
  getBlockFile()    = /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data3/tmp/blk_-9037085292264431097
  bytesAcked=0
  bytesOnDisk=0
	at org.apache.hadoop.hdfs.server.datanode.FSDataset.convertTemporaryToRbw(FSDataset.java:1383)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.convertTemporaryToRbw(DataNode.java:2019)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.__CLR3_0_2r95sa9tep(TestTransferRbw.java:121)
	at org.apache.hadoop.hdfs.server.datanode.TestTransferRbw.testTransferRbw(TestTransferRbw.java:63)
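
The exception above says the replica's recorded length (numBytes = 65536) is still short of the length already acknowledged to the client (visible = 66996), so the TEMPORARY replica cannot yet be converted to RBW. A hedged sketch of that comparison, with hypothetical names (the real convertTemporaryToRbw operates on replica objects, not raw longs):

    import java.io.IOException;

    final class RbwConversionCheck {
        // Illustrative only: the length check guarding the
        // TEMPORARY -> RBW conversion on the datanode.
        static void check(long numBytes, long visible, String replica) throws IOException {
            if (numBytes < visible) {
                throw new IOException(
                    numBytes + " = numBytes < visible = " + visible + ", r=" + replica);
            }
        }
    }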




Hadoop-Hdfs-trunk - Build # 609 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/609/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 704012 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-11592 / http-11593 / https-11594
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:11593
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.486 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.555 sec
   [cactus] Tomcat 5.x started on port [11593]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.311 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.343 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.875 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:49: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 51 minutes 16 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
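
Both failures report a 200 where the test expected a 403, i.e. the authorization filter permitted a path the test expected it to reject. In servlet terms, a rejection looks like the sketch below (illustrative only; DenyingFilter and its hard-coded "permitted" flag are hypothetical, not the hdfsproxy code):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    public class DenyingFilter implements Filter {
        public void init(FilterConfig cfg) {}
        public void destroy() {}
        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            boolean permitted = false; // a real filter would inspect the user and path here
            if (!permitted) {
                // Produces the 403 the test expects.
                ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, resp); // passing the request through yields the observed 200
        }
    }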




Hadoop-Hdfs-trunk - Build # 608 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/608/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 686601 lines...]
    [junit] 2011-03-16 12:31:20,198 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-16 12:31:20,299 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 33960
    [junit] 2011-03-16 12:31:20,299 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 33960: exiting
    [junit] 2011-03-16 12:31:20,300 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 33960
    [junit] 2011-03-16 12:31:20,300 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-16 12:31:20,300 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:46068, storageID=DS-1231135676-127.0.1.1-46068-1300278669382, infoPort=34367, ipcPort=33960):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-16 12:31:20,300 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-16 12:31:20,398 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-16 12:31:20,401 INFO  datanode.DataNode (DataNode.java:run(1462)) - DatanodeRegistration(127.0.0.1:46068, storageID=DS-1231135676-127.0.1.1-46068-1300278669382, infoPort=34367, ipcPort=33960):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-16 12:31:20,401 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 33960
    [junit] 2011-03-16 12:31:20,401 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-16 12:31:20,401 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-16 12:31:20,402 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-16 12:31:20,402 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-16 12:31:20,504 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-16 12:31:20,504 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 4 
    [junit] 2011-03-16 12:31:20,504 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-16 12:31:20,505 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 52365
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 5 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 6 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 9 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 4 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 3 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 1 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 2 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 52365
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 7 on 52365: exiting
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 8 on 52365: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.653 sec
    [junit] 2011-03-16 12:31:20,506 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730: Tests failed!

Total time: 58 minutes 25 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.


REGRESSION:  org.apache.hadoop.hdfs.TestDecommission.testHostsFile

Error Message:
Problem binding to /0.0.0.0:50020 : Address already in use

Stack Trace:
java.net.BindException: Problem binding to /0.0.0.0:50020 : Address already in use
	at org.apache.hadoop.ipc.Server.bind(Server.java:221)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:310)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1515)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:338)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:298)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:422)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:283)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:265)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1578)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1488)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestDecommission.__CLR3_0_2moi8ys10t5(TestDecommission.java:378)
	at org.apache.hadoop.hdfs.TestDecommission.testHostsFile(TestDecommission.java:375)
Caused by: java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
	at org.apache.hadoop.ipc.Server.bind(Server.java:219)
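
The BindException above means the test bound the datanode IPC server to the fixed default port 50020 while another process on the build slave still held it. The usual remedy in test code is to bind port 0 so the OS picks a free ephemeral port; a minimal generic sketch (not the MiniDFSCluster API):

    import java.net.ServerSocket;

    public class FreePortDemo {
        public static void main(String[] args) throws Exception {
            ServerSocket probe = new ServerSocket(0); // port 0 = any free port
            int freePort = probe.getLocalPort();      // e.g. feed this into the test config
            probe.close();
            System.out.println("picked free port " + freePort);
        }
    }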




Hadoop-Hdfs-trunk - Build # 607 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/607/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 703396 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-57474 / http-57475 / https-57476
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:57475
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.476 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.34 sec
   [cactus] Tomcat 5.x started on port [57475]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.303 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.842 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 50 minutes 42 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 606 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/606/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 696403 lines...]
    [junit] 2011-03-14 12:25:38,462 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-14 12:25:38,462 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-14 12:25:38,563 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 50062
    [junit] 2011-03-14 12:25:38,564 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 50062: exiting
    [junit] 2011-03-14 12:25:38,564 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 50062
    [junit] 2011-03-14 12:25:38,564 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-14 12:25:38,564 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-14 12:25:38,564 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:43047, storageID=DS-73264017-127.0.1.1-43047-1300105527845, infoPort=47732, ipcPort=50062):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-14 12:25:38,567 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-14 12:25:38,667 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-14 12:25:38,668 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:43047, storageID=DS-73264017-127.0.1.1-43047-1300105527845, infoPort=47732, ipcPort=50062):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-14 12:25:38,668 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 50062
    [junit] 2011-03-14 12:25:38,668 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-14 12:25:38,668 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-14 12:25:38,668 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-14 12:25:38,669 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-14 12:25:38,771 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-14 12:25:38,771 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 4 
    [junit] 2011-03-14 12:25:38,771 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 40816
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 40816: exiting
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 4 on 40816: exiting
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 9 on 40816: exiting
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 2 on 40816: exiting
    [junit] 2011-03-14 12:25:38,774 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 6 on 40816: exiting
    [junit] 2011-03-14 12:25:38,774 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 40816
    [junit] 2011-03-14 12:25:38,774 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 8 on 40816: exiting
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 1 on 40816: exiting
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 3 on 40816: exiting
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 5 on 40816: exiting
    [junit] 2011-03-14 12:25:38,773 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 7 on 40816: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.641 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 52 minutes 46 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:514)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182raa(TestBlockReport.java:451)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)
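
The assertion above comes from a bounded polling loop that gave up before the replica reached the TEMPORARY state on the datanode. A hedged sketch of that pattern (waitFor and its condition argument are hypothetical; the real waitForTempReplica polls datanode replica state):

    import java.util.concurrent.Callable;
    import junit.framework.AssertionFailedError;

    final class PollUntil {
        static void waitFor(Callable<Boolean> condition, long timeoutMs) throws Exception {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (!condition.call()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new AssertionFailedError(
                        "Was waiting too long for a replica to become TEMPORARY");
                }
                Thread.sleep(100); // poll interval
            }
        }
    }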




Hadoop-Hdfs-trunk - Build # 605 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/605/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 724220 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-13 12:25:33,635 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-13 12:25:33,735 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-13 12:25:33,735 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:53216, storageID=DS-968011023-127.0.1.1-53216-1300019123122, infoPort=58954, ipcPort=58071):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-13 12:25:33,735 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 58071
    [junit] 2011-03-13 12:25:33,736 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-13 12:25:33,736 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-13 12:25:33,736 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-13 12:25:33,736 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-13 12:25:33,838 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-13 12:25:33,838 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 9 3 
    [junit] 2011-03-13 12:25:33,838 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-13 12:25:33,839 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 43712
    [junit] 2011-03-13 12:25:33,840 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 43712: exiting
    [junit] 2011-03-13 12:25:33,840 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 1 on 43712: exiting
    [junit] 2011-03-13 12:25:33,840 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 2 on 43712: exiting
    [junit] 2011-03-13 12:25:33,840 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 8 on 43712: exiting
    [junit] 2011-03-13 12:25:33,840 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 9 on 43712: exiting
    [junit] 2011-03-13 12:25:33,841 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 6 on 43712: exiting
    [junit] 2011-03-13 12:25:33,841 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 7 on 43712: exiting
    [junit] 2011-03-13 12:25:33,841 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 3 on 43712: exiting
    [junit] 2011-03-13 12:25:33,841 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-13 12:25:33,841 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 43712
    [junit] 2011-03-13 12:25:33,840 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 5 on 43712: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.281 sec
    [junit] 2011-03-13 12:25:33,847 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 4 on 43712: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 52 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:514)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182raa(TestBlockReport.java:451)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)




Hadoop-Hdfs-trunk - Build # 604 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/604/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 704509 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-12 12:34:51,531 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-12 12:34:51,631 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-12 12:34:51,631 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:58228, storageID=DS-1627632864-127.0.1.1-58228-1299933280633, infoPort=53994, ipcPort=51584):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-12 12:34:51,631 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 51584
    [junit] 2011-03-12 12:34:51,632 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-12 12:34:51,632 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-12 12:34:51,632 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-12 12:34:51,632 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-12 12:34:51,734 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-12 12:34:51,734 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-12 12:34:51,734 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 4 
    [junit] 2011-03-12 12:34:51,736 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 50545
    [junit] 2011-03-12 12:34:51,736 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 1 on 50545: exiting
    [junit] 2011-03-12 12:34:51,736 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 50545: exiting
    [junit] 2011-03-12 12:34:51,736 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 2 on 50545: exiting
    [junit] 2011-03-12 12:34:51,737 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 3 on 50545: exiting
    [junit] 2011-03-12 12:34:51,737 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 4 on 50545: exiting
    [junit] 2011-03-12 12:34:51,737 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 6 on 50545: exiting
    [junit] 2011-03-12 12:34:51,737 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 5 on 50545: exiting
    [junit] 2011-03-12 12:34:51,737 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 7 on 50545: exiting
    [junit] 2011-03-12 12:34:51,738 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 8 on 50545: exiting
    [junit] 2011-03-12 12:34:51,738 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 9 on 50545: exiting
    [junit] 2011-03-12 12:34:51,738 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 50545
    [junit] 2011-03-12 12:34:51,739 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.593 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 61 minutes 40 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:514)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182raa(TestBlockReport.java:451)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)




Hadoop-Hdfs-trunk - Build # 603 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/603/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 702030 lines...]
    [junit] 2011-03-11 12:32:49,243 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-11 12:32:49,243 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-11 12:32:49,244 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-11 12:32:49,354 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 55399
    [junit] 2011-03-11 12:32:49,355 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 55399: exiting
    [junit] 2011-03-11 12:32:49,355 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 55399
    [junit] 2011-03-11 12:32:49,355 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:48967, storageID=DS-1670833288-127.0.1.1-48967-1299846758473, infoPort=46766, ipcPort=55399):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-11 12:32:49,355 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-11 12:32:49,356 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-11 12:32:49,456 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-11 12:32:49,456 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:48967, storageID=DS-1670833288-127.0.1.1-48967-1299846758473, infoPort=46766, ipcPort=55399):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-11 12:32:49,457 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 55399
    [junit] 2011-03-11 12:32:49,457 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-11 12:32:49,457 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-11 12:32:49,457 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-11 12:32:49,458 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-11 12:32:49,559 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-11 12:32:49,559 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 2 
    [junit] 2011-03-11 12:32:49,559 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-11 12:32:49,561 INFO  ipc.Server (Server.java:stop(1624)) - Stopping server on 51176
    [junit] 2011-03-11 12:32:49,561 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 0 on 51176: exiting
    [junit] 2011-03-11 12:32:49,561 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 1 on 51176: exiting
    [junit] 2011-03-11 12:32:49,562 INFO  ipc.Server (Server.java:run(689)) - Stopping IPC Server Responder
    [junit] 2011-03-11 12:32:49,562 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 5 on 51176: exiting
    [junit] 2011-03-11 12:32:49,562 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 51176
    [junit] 2011-03-11 12:32:49,562 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 3 on 51176: exiting
    [junit] 2011-03-11 12:32:49,562 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 4 on 51176: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.441 sec
    [junit] 2011-03-11 12:32:49,570 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 2 on 51176: exiting
    [junit] 2011-03-11 12:32:49,570 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 8 on 51176: exiting
    [junit] 2011-03-11 12:32:49,570 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 7 on 51176: exiting
    [junit] 2011-03-11 12:32:49,570 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 6 on 51176: exiting
    [junit] 2011-03-11 12:32:49,570 INFO  ipc.Server (Server.java:run(1457)) - IPC Server handler 9 on 51176: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 59 minutes 54 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:514)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2j2e00jr9p(TestBlockReport.java:408)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08(TestBlockReport.java:390)


FAILED:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
127.0.0.1:37712is not an underUtilized node

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:37712is not an underUtilized node
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1011)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1496)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:307)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR3_0_29j3j5bsqr(TestBalancer.java:327)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324)
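
For the TestBalancer failure, note the missing space in the message ("127.0.0.1:37712is not an underUtilized node"): it comes straight from concatenating the node address with the string literal inside Balancer.initNodes. A minimal, self-contained sketch of the decision being asserted -- the average-utilization and threshold figures below are invented for illustration, not taken from Balancer.java:

    public class UnderUtilizedCheckSketch {

      // Assumed figures for illustration only: cluster average utilization
      // and the balancer threshold, both in percent.
      static final double AVG_UTILIZATION = 50.0;
      static final double THRESHOLD = 10.0;

      // A node counts as underutilized when it sits more than THRESHOLD
      // percentage points below the cluster average.
      static boolean isUnderUtilized(double utilizationPercent) {
        return utilizationPercent < AVG_UTILIZATION - THRESHOLD;
      }

      static void checkUnderUtilized(String nodeAddr, double utilizationPercent) {
        if (!isUnderUtilized(utilizationPercent)) {
          // No space between the address and the literal -- matching the
          // malformed message in the build log above.
          throw new AssertionError(nodeAddr + "is not an underUtilized node");
        }
      }

      public static void main(String[] args) {
        checkUnderUtilized("127.0.0.1:37712", 35.0);  // 35% < 40% -> passes
        checkUnderUtilized("127.0.0.1:37712", 45.0);  // 45% >= 40% -> throws
      }
    }

The assertion firing means the node the test expected to be classified as underutilized ended up in a different utilization bucket -- again plausibly a data-placement/timing effect, since the same testBalancer0 passes in other builds in this thread.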




Hadoop-Hdfs-trunk - Build # 602 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/602/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 712409 lines...]
    [junit] 2011-03-10 12:34:33,978 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-10 12:34:33,978 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-10 12:34:33,978 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-10 12:34:34,080 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 44610
    [junit] 2011-03-10 12:34:34,080 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 44610: exiting
    [junit] 2011-03-10 12:34:34,080 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 44610
    [junit] 2011-03-10 12:34:34,081 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:54622, storageID=DS-1811748547-127.0.1.1-54622-1299760463191, infoPort=37842, ipcPort=44610):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-10 12:34:34,081 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-10 12:34:34,080 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-03-10 12:34:34,181 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-10 12:34:34,182 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:54622, storageID=DS-1811748547-127.0.1.1-54622-1299760463191, infoPort=37842, ipcPort=44610):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-10 12:34:34,182 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 44610
    [junit] 2011-03-10 12:34:34,182 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-10 12:34:34,182 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-10 12:34:34,182 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-10 12:34:34,183 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-10 12:34:34,288 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-10 12:34:34,288 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-10 12:34:34,289 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 2 
    [junit] 2011-03-10 12:34:34,290 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 34507
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 34507
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 34507: exiting
    [junit] 2011-03-10 12:34:34,291 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 34507: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.752 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 61 minutes 24 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
127.0.0.1:44347is not an underUtilized node

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:44347is not an underUtilized node
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1011)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1496)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:307)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR3_0_29j3j5bsqp(TestBalancer.java:327)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324)




Hadoop-Hdfs-trunk - Build # 601 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/601/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 709524 lines...]
    [junit] 2011-03-09 12:34:57,741 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-09 12:34:57,741 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-09 12:34:57,842 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 59292
    [junit] 2011-03-09 12:34:57,843 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 59292: exiting
    [junit] 2011-03-09 12:34:57,843 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 59292
    [junit] 2011-03-09 12:34:57,843 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-09 12:34:57,843 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:53741, storageID=DS-820295002-127.0.1.1-53741-1299674087032, infoPort=36626, ipcPort=59292):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-09 12:34:57,843 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-03-09 12:34:57,845 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-09 12:34:57,946 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-09 12:34:57,946 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:53741, storageID=DS-820295002-127.0.1.1-53741-1299674087032, infoPort=36626, ipcPort=59292):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-09 12:34:57,946 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 59292
    [junit] 2011-03-09 12:34:57,946 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-09 12:34:57,947 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-09 12:34:57,947 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-09 12:34:57,947 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-09 12:34:58,049 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-09 12:34:58,049 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 3 
    [junit] 2011-03-09 12:34:58,049 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-09 12:34:58,050 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 34632
    [junit] 2011-03-09 12:34:58,050 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 34632: exiting
    [junit] 2011-03-09 12:34:58,051 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 34632: exiting
    [junit] 2011-03-09 12:34:58,051 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-03-09 12:34:58,050 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 34632: exiting
    [junit] 2011-03-09 12:34:58,050 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 34632: exiting
    [junit] 2011-03-09 12:34:58,050 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 34632: exiting
    [junit] 2011-03-09 12:34:58,051 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 34632
    [junit] 2011-03-09 12:34:58,051 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 34632: exiting
    [junit] 2011-03-09 12:34:58,051 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 34632: exiting
    [junit] 2011-03-09 12:34:58,051 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 34632: exiting
    [junit] 2011-03-09 12:34:58,052 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 34632: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.812 sec
    [junit] 2011-03-09 12:34:58,055 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 34632: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 61 minutes 34 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNodeCount.testNodeCount

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.hdfs.server.namenode.BlockManager.countNodes(BlockManager.java:1431)
	at org.apache.hadoop.hdfs.server.namenode.TestNodeCount.__CLR3_0_29bdgm6s8c(TestNodeCount.java:90)
	at org.apache.hadoop.hdfs.server.namenode.TestNodeCount.testNodeCount(TestNodeCount.java:40)
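
The NullPointerException at BlockManager.countNodes(BlockManager.java:1431) is consistent with the pattern where the test asks for replica counts of a block whose entry has already disappeared from the block map, so the lookup yields null and the subsequent iteration dereferences it. A toy, self-contained sketch of that failure mode -- the types and the "live" bookkeeping are assumptions for illustration, not BlockManager code:

    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    public class CountNodesSketch {

      static int countLiveNodes(List<String> replicaStates) {
        int live = 0;
        // If the block was already removed from the blocks map, the lookup
        // that produced replicaStates returns null, and this call throws
        // NullPointerException -- the failure mode reported above.
        Iterator<String> it = replicaStates.iterator();
        while (it.hasNext()) {
          if ("live".equals(it.next())) {
            live++;
          }
        }
        return live;
      }

      public static void main(String[] args) {
        System.out.println(countLiveNodes(Arrays.asList("live", "corrupt", "live")));
        System.out.println(countLiveNodes(null));  // reproduces the NPE
      }
    }

The "Error Message: null" line above is just how JUnit renders an exception with no detail message, which is typical of a raw NullPointerException.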




Hadoop-Hdfs-trunk - Build # 600 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/600/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 711758 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-14315 / http-14316 / https-14317
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:14316
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.514 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.333 sec
   [cactus] Tomcat 5.x started on port [14316]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.323 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.831 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:753: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:734: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 52 minutes 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
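
Both hdfsproxy failures are the same assertion: the Cactus end-callback (endPathPermit / endPathPermitQualified) checks the HTTP status of the proxied response, expects 403 Forbidden, and gets 200 -- that is, the authorization filter let through a request the test expected it to deny. A self-contained sketch of the permit/deny decision being tested; the directory-prefix rule below is an assumption for illustration, not the hdfsproxy source:

    import java.util.Arrays;
    import java.util.List;

    public class AuthorizationFilterSketch {

      static final int SC_OK = 200;
      static final int SC_FORBIDDEN = 403;

      // A request path is permitted only when it equals, or falls under,
      // one of the caller's allowed directories; anything else is denied.
      static int checkPath(String path, List<String> permittedDirs) {
        for (String dir : permittedDirs) {
          if (path.equals(dir) || path.startsWith(dir + "/")) {
            return SC_OK;
          }
        }
        return SC_FORBIDDEN;
      }

      public static void main(String[] args) {
        List<String> permitted = Arrays.asList("/data/alice");
        System.out.println(checkPath("/data/alice/file1", permitted)); // 200
        System.out.println(checkPath("/data/bob/file1", permitted));   // 403
      }
    }

In the Cactus pattern, the end<TestName>(WebResponse) method runs on the client after the server-side filter has executed, so something like assertEquals(403, theResponse.getStatusCode()) there is what produces the "expected:<403> but was:<200>" text above. The same two assertions fail identically in builds 598-600 below, so this looks like a deterministic behavior change in the filter rather than a flaky test.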




Hadoop-Hdfs-trunk - Build # 599 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/599/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 719738 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-63314 / http-63315 / https-63316
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:63315
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.457 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.316 sec
   [cactus] Tomcat 5.x started on port [63315]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.329 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.33 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.825 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 52 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 598 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/598/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 704399 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-26393 / http-26394 / https-26395
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:26394
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.461 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.343 sec
   [cactus] Tomcat 5.x started on port [26394]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.315 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.328 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.825 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 52 minutes 23 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 597 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/597/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 702068 lines...]
    [junit] 2011-03-05 15:57:50,148 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-05 15:57:50,149 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-05 15:57:50,149 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-03-05 15:57:50,250 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 32794
    [junit] 2011-03-05 15:57:50,251 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 32794: exiting
    [junit] 2011-03-05 15:57:50,251 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 32794
    [junit] 2011-03-05 15:57:50,251 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:39757, storageID=DS-1671495469-127.0.1.1-39757-1299340659374, infoPort=41007, ipcPort=32794):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-03-05 15:57:50,251 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-03-05 15:57:50,251 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-03-05 15:57:50,354 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-05 15:57:50,354 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:39757, storageID=DS-1671495469-127.0.1.1-39757-1299340659374, infoPort=41007, ipcPort=32794):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-05 15:57:50,354 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 32794
    [junit] 2011-03-05 15:57:50,354 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-05 15:57:50,355 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-05 15:57:50,355 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-05 15:57:50,355 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-05 15:57:50,457 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-05 15:57:50,457 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 9 4 
    [junit] 2011-03-05 15:57:50,457 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-05 15:57:50,458 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 41611
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 41611: exiting
    [junit] 2011-03-05 15:57:50,459 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 41611: exiting
    [junit] 2011-03-05 15:57:50,460 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 41611
    [junit] 2011-03-05 15:57:50,460 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-03-05 15:57:50,461 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 41611: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.777 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 105 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testErrorReplicas

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-trunk - Build # 596 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/596/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 741127 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-52382 / http-52383 / https-52384
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:52383
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.45 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.343 sec
   [cactus] Tomcat 5.x started on port [52383]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.342 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.321 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.838 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 61 minutes 42 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 595 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/595/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 740316 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-03-03 12:26:37,364 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-03 12:26:37,464 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-03-03 12:26:37,464 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:39230, storageID=DS-147357001-127.0.1.1-39230-1299155186617, infoPort=40649, ipcPort=49091):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-03-03 12:26:37,464 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 49091
    [junit] 2011-03-03 12:26:37,465 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-03-03 12:26:37,465 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-03-03 12:26:37,465 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-03-03 12:26:37,466 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-03-03 12:26:37,567 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2854)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-03 12:26:37,567 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-03-03 12:26:37,568 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 4 3 
    [junit] 2011-03-03 12:26:37,569 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 33095
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 33095: exiting
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 33095: exiting
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 33095: exiting
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 33095: exiting
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 33095: exiting
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 33095: exiting
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 33095
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 33095: exiting
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 33095: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.469 sec
    [junit] 2011-03-03 12:26:37,570 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 33095: exiting
    [junit] 2011-03-03 12:26:37,571 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 33095: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 52 minutes 50 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Was waiting too long for a replica to become TEMPORARY

Stack Trace:
junit.framework.AssertionFailedError: Was waiting too long for a replica to become TEMPORARY
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.waitForTempReplica(TestBlockReport.java:514)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182rac(TestBlockReport.java:451)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)




Hadoop-Hdfs-trunk - Build # 594 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/594/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 719188 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-41970 / http-41971 / https-41972
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:41971
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.624 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.345 sec
   [cactus] Tomcat 5.x started on port [41971]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.321 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.857 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 60 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 593 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/593/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 718385 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-56330 / http-56331 / https-56332
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:56331
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.466 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
   [cactus] Tomcat 5.x started on port [56331]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.332 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.342 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.886 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 51 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 592 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/592/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 716911 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-39686 / http-39687 / https-39688
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:39687
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.451 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.355 sec
   [cactus] Tomcat 5.x started on port [39687]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.31 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.316 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.878 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 61 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Re: Fwd: Hadoop-Hdfs-trunk - Build # 591 - Still Failing

Posted by Konstantin Boudnik <co...@apache.org>.
I have taken a look at the tests and I should note that the way they are
written is kinda misleading. For instance, the message we are seeing in
Hudson says
  expected:<403> but was:<200>

whereas the reality is that the expected value was <200> and the actual
value was <403>. Basically, the order of the assert arguments is reversed
in a number of places. While this isn't the cause of the failure, it does
confuse the analysis; a minimal sketch of the mix-up follows.
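
As a purely illustrative sketch (a hypothetical AssertOrderExample class,
not the real TestAuthorizationFilter code), here is how reversed
assertEquals arguments in a JUnit 3 test produce exactly this kind of
misleading message:

    import junit.framework.TestCase;

    // Hypothetical illustration: JUnit's assertEquals(expected, actual)
    // bakes the argument order into its failure message, so swapping the
    // arguments swaps what gets reported as "expected".
    public class AssertOrderExample extends TestCase {
        public void testArgumentOrder() {
            int actualHttpStatus = 403;  // what the filter really returned
            // Reversed order: fails with the misleading
            //   junit.framework.AssertionFailedError: expected:<403> but was:<200>
            assertEquals(actualHttpStatus, 200);
            // The documented order, assertEquals(200, actualHttpStatus),
            // would instead fail with "expected:<200> but was:<403>".
        }
    }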

It'd be great to see a maintainer of this component take a look at the
failures so we can eventually have a green HDFS build.

I have opened https://issues.apache.org/jira/browse/HDFS-1666 to track it

Cos

On Thu, Feb 24, 2011 at 10:14AM, Todd Lipcon wrote:
> Can someone familiar with hdfsproxy look into this consistent unit test
> failure? People voted in support of keeping this contrib, but it would be
> easier to be satisfied with that decision if someone stepped up to fix these
> tests that have been failing for quite some time.
> 
> -Todd
> 
> ---------- Forwarded message ----------
> From: Apache Hudson Server <hu...@hudson.apache.org>
> Date: Thu, Feb 24, 2011 at 4:36 AM
> Subject: Hadoop-Hdfs-trunk - Build # 591 - Still Failing
> To: hdfs-dev@hadoop.apache.org
> 
> 
> See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/591/
> 
> [...]
> 
> -- 
> Todd Lipcon
> Software Engineer, Cloudera

Fwd: Hadoop-Hdfs-trunk - Build # 591 - Still Failing

Posted by Todd Lipcon <to...@cloudera.com>.
Can someone familiar with hdfsproxy look into this consistent unit test
failure? People voted in support of keeping this contrib, but it would be
easier to be satisfied with that decision if someone stepped up to fix these
tests that have been failing for quite some time.

-Todd

---------- Forwarded message ----------
From: Apache Hudson Server <hu...@hudson.apache.org>
Date: Thu, Feb 24, 2011 at 4:36 AM
Subject: Hadoop-Hdfs-trunk - Build # 591 - Still Failing
To: hdfs-dev@hadoop.apache.org


See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/591/

[...]


-- 
Todd Lipcon
Software Engineer, Cloudera

Hadoop-Hdfs-trunk - Build # 591 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/591/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 719693 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-57271 / http-57272 / https-57273
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:57272
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.454 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tomcat 5.x started on port [57272]
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.347 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.307 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.024 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 59 minutes 43 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)
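
The two failures above are Cactus end-methods asserting that hdfsproxy answers a request for a non-permitted path with HTTP 403 (Forbidden); instead the authorization filter let the request through with 200 (OK). A minimal sketch of what such an end-method looks like — a hypothetical reconstruction, not the actual TestAuthorizationFilter source:

    // Hypothetical reconstruction of the failing check; only the class name,
    // method names, and the expected/actual codes come from the report above.
    import org.apache.cactus.ServletTestCase;
    import org.apache.cactus.WebResponse;

    public class TestAuthorizationFilter extends ServletTestCase {
        public void testPathPermit() {
            // Client side: issue a request for a path outside the user's
            // permitted directories; Cactus then runs the filter in Tomcat.
        }

        // Cactus invokes endXXX(WebResponse) on the client after testXXX.
        public void endPathPermit(WebResponse response) {
            // The build reports: expected:<403> but was:<200>
            assertEquals(403, response.getStatusCode());
        }
    }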




Hadoop-Hdfs-trunk - Build # 590 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/590/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 735848 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-13253 / http-13254 / https-13255
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:13254
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.477 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.341 sec
   [cactus] Tomcat 5.x started on port [13254]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.323 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.319 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.806 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 61 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 589 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/589/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 703645 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-45766 / http-45767 / https-45768
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:45767
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.476 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.315 sec
   [cactus] Tomcat 5.x started on port [45767]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.354 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.322 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.825 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 51 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 588 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/588/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 706271 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-45613 / http-45614 / https-45615
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:45614
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.443 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.338 sec
   [cactus] Tomcat 5.x started on port [45614]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.312 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.334 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.867 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 61 minutes 2 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 587 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/587/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 708813 lines...]
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target
     [echo]  Including clover.jar in the war file ...
[cactifywar] Analyzing war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/hdfsproxy-2.0-test.war
[cactifywar] Building war: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war

cactifywar:

test-cactus:
     [echo]  Free Ports: startup-22991 / http-22992 / https-22993
     [echo] Please take a deep breath while Cargo gets the Tomcat for running the servlet tests...
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/temp
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/logs
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/reports
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
     [copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -----------------------------------------------------------------
   [cactus] Running tests against Tomcat 5.x @ http://localhost:22992
   [cactus] -----------------------------------------------------------------
   [cactus] Deploying [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/test.war] to [/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/contrib/hdfsproxy/target/tomcat-config/webapps]...
   [cactus] Tomcat 5.x starting...
Server [Apache-Coyote/1.1] started
   [cactus] WARNING: multiple versions of ant detected in path for junit 
   [cactus]          jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
   [cactus]      and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tests run: 4, Failures: 2, Errors: 0, Time elapsed: 0.465 sec
   [cactus] Test org.apache.hadoop.hdfsproxy.TestAuthorizationFilter FAILED
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.524 sec
   [cactus] Tomcat 5.x started on port [22992]
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.361 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.325 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.899 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:750: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:731: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/build.xml:48: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:343: Tests failed!

Total time: 51 minutes 21 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermit

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermit(TestAuthorizationFilter.java:113)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)


FAILED:  org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.testPathPermitQualified

Error Message:
expected:<403> but was:<200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<403> but was:<200>
	at org.apache.hadoop.hdfsproxy.TestAuthorizationFilter.endPathPermitQualified(TestAuthorizationFilter.java:136)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callGenericEndMethod(ClientTestCaseCaller.java:442)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.callEndMethod(ClientTestCaseCaller.java:209)
	at org.apache.cactus.internal.client.ClientTestCaseCaller.runTest(ClientTestCaseCaller.java:149)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBareClient(AbstractCactusTestCase.java:218)
	at org.apache.cactus.internal.AbstractCactusTestCase.runBare(AbstractCactusTestCase.java:134)




Hadoop-Hdfs-trunk - Build # 586 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/586/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 647441 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-02-18 12:33:05,487 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-18 12:33:05,587 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-18 12:33:05,587 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:59724, storageID=DS-1057536237-127.0.1.1-59724-1298032374660, infoPort=45229, ipcPort=44897):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-18 12:33:05,588 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 44897
    [junit] 2011-02-18 12:33:05,588 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-18 12:33:05,588 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-18 12:33:05,588 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-18 12:33:05,589 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-18 12:33:05,690 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-18 12:33:05,690 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 7 3 
    [junit] 2011-02-18 12:33:05,690 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-18 12:33:05,692 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 46365
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 46365
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 46365: exiting
    [junit] 2011-02-18 12:33:05,693 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 46365: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.378 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 60 minutes 6 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
End of file reached before reading fully.

Stack Trace:
java.io.EOFException: End of file reached before reading fully.
	at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:73)
	at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:61)
	at org.apache.hadoop.hdfs.AppendTestUtil.checkFullFile(AppendTestUtil.java:159)
	at org.apache.hadoop.hdfs.TestHFlush.__CLR3_0_213w22tpd8(TestHFlush.java:273)
	at org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted(TestHFlush.java:216)
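
The EOFException above means the verifying reader hit end-of-file before the expected byte count: bytes that the interrupted writer should already have pushed to the datanodes via hflush() never became visible. The contract under test is that hflush() makes buffered bytes readable by new readers while the file is still open — a sketch of that contract under assumed file name and payload (the real test runs against a MiniDFSCluster):

    // Sketch of the hflush() visibility contract TestHFlush exercises.
    // The path and payload are assumptions for illustration only.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HFlushSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/tmp/hflush-demo");

            FSDataOutputStream out = fs.create(file);
            byte[] payload = "visible-after-hflush".getBytes("UTF-8");
            out.write(payload);
            out.hflush();  // buffered bytes must now reach the pipeline

            // A reader opened after hflush() must see every flushed byte;
            // readFully() throwing EOFException is exactly this failure.
            FSDataInputStream in = fs.open(file);
            byte[] buf = new byte[payload.length];
            in.readFully(0, buf);
            in.close();
            out.close();
        }
    }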


FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
	at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
	at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:69)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:52)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at java.nio.channels.Selector.open(Selector.java:209)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:316)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1513)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:338)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:298)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:422)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:283)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:265)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1576)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
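
The two "directory is already locked" failures are fallout from the first one: the MiniDFSCluster of the previous test was never shut down, so its in_use.lock on .../dfs/name1 is still held when the next setUp() tries to format the directory. Guarding shutdown in tearDown() breaks the cascade — a sketch assuming the test keeps its cluster in a field (the base-class name here is invented):

    // Sketch: always release the cluster, even after a failed setUp().
    import junit.framework.TestCase;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public abstract class ClusterTestBase extends TestCase {
        protected MiniDFSCluster cluster;

        @Override
        protected void tearDown() throws Exception {
            if (cluster != null) {
                cluster.shutdown();  // releases in_use.lock files and sockets
                cluster = null;
            }
            super.tearDown();
        }
    }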




Hadoop-Hdfs-trunk - Build # 585 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/585/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 650617 lines...]
    [junit] 2011-02-17 12:32:38,770 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-17 12:32:38,770 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-17 12:32:38,872 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 43514
    [junit] 2011-02-17 12:32:38,872 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 43514: exiting
    [junit] 2011-02-17 12:32:38,872 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 43514
    [junit] 2011-02-17 12:32:38,872 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-02-17 12:32:38,872 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-02-17 12:32:38,873 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:37431, storageID=DS-1469273064-127.0.1.1-37431-1297945948029, infoPort=39656, ipcPort=43514):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-02-17 12:32:38,875 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-17 12:32:38,975 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-17 12:32:38,976 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:37431, storageID=DS-1469273064-127.0.1.1-37431-1297945948029, infoPort=39656, ipcPort=43514):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-17 12:32:38,976 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 43514
    [junit] 2011-02-17 12:32:38,976 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-17 12:32:38,976 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-17 12:32:38,976 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-17 12:32:38,977 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-17 12:32:39,078 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-17 12:32:39,078 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-17 12:32:39,078 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-02-17 12:32:39,080 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 54471
    [junit] 2011-02-17 12:32:39,080 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 54471: exiting
    [junit] 2011-02-17 12:32:39,080 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 54471: exiting
    [junit] 2011-02-17 12:32:39,080 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 54471: exiting
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 54471: exiting
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 54471
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 54471: exiting
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 54471: exiting
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 54471: exiting
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 54471: exiting
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 54471: exiting
    [junit] 2011-02-17 12:32:39,081 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 54471: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.473 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 59 minutes 25 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite

Error Message:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/current/VERSION (Too many open files)

Stack Trace:
java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/current/VERSION (Too many open files)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.write(Storage.java:265)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.write(Storage.java:259)
	at org.apache.hadoop.hdfs.server.common.Storage.writeAll(Storage.java:806)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:714)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:692)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)

Stack Trace:
java.lang.RuntimeException: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1546)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1411)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1357)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:600)
	at org.apache.hadoop.fs.FileSystem.setDefaultUri(FileSystem.java:162)
	at org.apache.hadoop.fs.FileSystem.setDefaultUri(FileSystem.java:170)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:449)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
Caused by: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:106)
	at java.io.FileInputStream.<init>(FileInputStream.java:66)
	at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
	at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
	at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:653)
	at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:186)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:772)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
	at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
	at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:235)
	at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1460)


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)

Stack Trace:
java.lang.RuntimeException: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1546)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1411)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1357)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:600)
	at org.apache.hadoop.fs.FileSystem.setDefaultUri(FileSystem.java:162)
	at org.apache.hadoop.fs.FileSystem.setDefaultUri(FileSystem.java:170)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:449)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
Caused by: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:106)
	at java.io.FileInputStream.<init>(FileInputStream.java:66)
	at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
	at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
	at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:653)
	at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:186)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:772)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
	at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
	at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:235)
	at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1460)




Hadoop-Hdfs-trunk - Build # 584 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/584/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 694186 lines...]
    [junit] 2011-02-16 12:33:33,046 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-16 12:33:33,046 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-16 12:33:33,048 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 45109
    [junit] 2011-02-16 12:33:33,048 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 45109: exiting
    [junit] 2011-02-16 12:33:33,049 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 45109
    [junit] 2011-02-16 12:33:33,049 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-02-16 12:33:33,049 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:40341, storageID=DS-622966542-127.0.1.1-40341-1297859602463, infoPort=55796, ipcPort=45109):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-02-16 12:33:33,049 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-02-16 12:33:33,051 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-16 12:33:33,152 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-16 12:33:33,152 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:40341, storageID=DS-622966542-127.0.1.1-40341-1297859602463, infoPort=55796, ipcPort=45109):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-16 12:33:33,152 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 45109
    [junit] 2011-02-16 12:33:33,152 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-16 12:33:33,153 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-16 12:33:33,153 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-16 12:33:33,153 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-16 12:33:33,255 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-16 12:33:33,255 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-16 12:33:33,256 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 4 
    [junit] 2011-02-16 12:33:33,257 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 39710
    [junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 39710: exiting
    [junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 39710: exiting
    [junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 39710: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.347 sec
    [junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-02-16 12:33:33,269 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 39710: exiting
    [junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 39710: exiting
    [junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 39710: exiting
    [junit] 2011-02-16 12:33:33,270 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 39710: exiting
    [junit] 2011-02-16 12:33:33,258 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 39710
    [junit] 2011-02-16 12:33:33,270 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 39710: exiting
    [junit] 2011-02-16 12:33:33,270 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 39710: exiting
    [junit] 2011-02-16 12:33:33,269 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 39710: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 60 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>

Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182r9s(TestBlockReport.java:457)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)




Hadoop-Hdfs-trunk - Build # 583 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/583/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 681243 lines...]
    [junit] 2011-02-15 12:22:31,972 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-15 12:22:31,973 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-15 12:22:32,074 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 50629
    [junit] 2011-02-15 12:22:32,074 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 50629: exiting
    [junit] 2011-02-15 12:22:32,074 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 50629
    [junit] 2011-02-15 12:22:32,074 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-02-15 12:22:32,075 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:60243, storageID=DS-121796189-127.0.1.1-60243-1297772541242, infoPort=50058, ipcPort=50629):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-02-15 12:22:32,074 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] 2011-02-15 12:22:32,077 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-15 12:22:32,177 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-15 12:22:32,178 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:60243, storageID=DS-121796189-127.0.1.1-60243-1297772541242, infoPort=50058, ipcPort=50629):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-15 12:22:32,178 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 50629
    [junit] 2011-02-15 12:22:32,178 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-15 12:22:32,178 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-15 12:22:32,178 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-15 12:22:32,179 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-15 12:22:32,189 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-15 12:22:32,189 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-15 12:22:32,189 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 3 
    [junit] 2011-02-15 12:22:32,190 INFO  ipc.Server (Server.java:stop(1622)) - Stopping server on 41075
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 0 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 7 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 5 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 1 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 2 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 9 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 8 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 6 on 41075: exiting
    [junit] 2011-02-15 12:22:32,191 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 4 on 41075: exiting
    [junit] 2011-02-15 12:22:32,192 INFO  ipc.Server (Server.java:run(1455)) - IPC Server handler 3 on 41075: exiting
    [junit] 2011-02-15 12:22:32,194 INFO  ipc.Server (Server.java:run(485)) - Stopping IPC Server listener on 41075
    [junit] 2011-02-15 12:22:32,194 INFO  ipc.Server (Server.java:run(687)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.355 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 49 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
	at sun.nio.ch.IOUtil.initPipe(Native Method)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at java.nio.channels.Selector.open(Selector.java:209)
	at org.apache.hadoop.ipc.Server$Responder.<init>(Server.java:614)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1522)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:338)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:298)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:422)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:283)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:265)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1576)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)




Hadoop-Hdfs-trunk - Build # 582 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/582/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 694693 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-02-14 12:23:40,358 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-14 12:23:40,459 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-14 12:23:40,459 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:51763, storageID=DS-1581103133-127.0.1.1-51763-1297686209769, infoPort=59454, ipcPort=44458):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-14 12:23:40,459 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 44458
    [junit] 2011-02-14 12:23:40,459 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-14 12:23:40,460 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-14 12:23:40,460 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-14 12:23:40,460 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-14 12:23:40,462 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-14 12:23:40,462 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 7 3 
    [junit] 2011-02-14 12:23:40,463 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-14 12:23:40,464 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 42776
    [junit] 2011-02-14 12:23:40,464 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 42776: exiting
    [junit] 2011-02-14 12:23:40,464 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 42776: exiting
    [junit] 2011-02-14 12:23:40,464 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 42776: exiting
    [junit] 2011-02-14 12:23:40,464 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 42776: exiting
    [junit] 2011-02-14 12:23:40,465 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 42776: exiting
    [junit] 2011-02-14 12:23:40,465 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 42776: exiting
    [junit] 2011-02-14 12:23:40,465 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-14 12:23:40,465 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 42776
    [junit] 2011-02-14 12:23:40,465 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 42776: exiting
    [junit] 2011-02-14 12:23:40,465 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 42776: exiting
    [junit] 2011-02-14 12:23:40,464 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 42776: exiting
    [junit] 2011-02-14 12:23:40,464 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 42776: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 34.983 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 50 minutes 37 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08

Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>

Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2j2e00jr97(TestBlockReport.java:414)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_08(TestBlockReport.java:390)




Hadoop-Hdfs-trunk - Build # 581 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/581/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 653948 lines...]
    [junit] 2011-02-13 12:24:37,445 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-13 12:24:37,445 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-13 12:24:37,547 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 49354
    [junit] 2011-02-13 12:24:37,547 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 49354: exiting
    [junit] 2011-02-13 12:24:37,547 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-02-13 12:24:37,547 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:51501, storageID=DS-860255907-127.0.1.1-51501-1297599866704, infoPort=46020, ipcPort=49354):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-02-13 12:24:37,547 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-13 12:24:37,547 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 49354
    [junit] 2011-02-13 12:24:37,550 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-13 12:24:37,550 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-13 12:24:37,551 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:51501, storageID=DS-860255907-127.0.1.1-51501-1297599866704, infoPort=46020, ipcPort=49354):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-13 12:24:37,551 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 49354
    [junit] 2011-02-13 12:24:37,551 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-13 12:24:37,551 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-13 12:24:37,551 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-13 12:24:37,552 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-13 12:24:37,653 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-13 12:24:37,653 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 12 2 
    [junit] 2011-02-13 12:24:37,654 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-13 12:24:37,655 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 53844
    [junit] 2011-02-13 12:24:37,655 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 53844: exiting
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 53844: exiting
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 53844: exiting
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 53844: exiting
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 53844: exiting
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 53844: exiting
    [junit] 2011-02-13 12:24:37,657 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 53844: exiting
    [junit] 2011-02-13 12:24:37,657 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 53844: exiting
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 53844
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 53844: exiting
    [junit] 2011-02-13 12:24:37,656 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 53844: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.286 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 51 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
	at sun.nio.ch.IOUtil.initPipe(Native Method)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at java.nio.channels.Selector.open(Selector.java:209)
	at org.apache.hadoop.ipc.Server$Responder.<init>(Server.java:602)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1510)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:338)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:298)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:422)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:283)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:265)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1576)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:315)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:302)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.__CLR3_0_2u5mf5tro5(TestFileConcurrentReader.java:275)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite(TestFileConcurrentReader.java:274)


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)




Hadoop-Hdfs-trunk - Build # 580 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/580/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 673325 lines...]
    [junit] 2011-02-12 13:09:09,046 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-12 13:09:09,047 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-12 13:09:09,047 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-12 13:09:09,148 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 58457
    [junit] 2011-02-12 13:09:09,149 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 58457: exiting
    [junit] 2011-02-12 13:09:09,149 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 58457
    [junit] 2011-02-12 13:09:09,149 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-12 13:09:09,150 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:43037, storageID=DS-1359992697-127.0.1.1-43037-1297516138283, infoPort=53748, ipcPort=58457):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-02-12 13:09:09,150 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-12 13:09:09,250 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-12 13:09:09,251 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:43037, storageID=DS-1359992697-127.0.1.1-43037-1297516138283, infoPort=53748, ipcPort=58457):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-12 13:09:09,251 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 58457
    [junit] 2011-02-12 13:09:09,251 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-12 13:09:09,251 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-12 13:09:09,252 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-12 13:09:09,252 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-12 13:09:09,358 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-12 13:09:09,358 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-12 13:09:09,358 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-02-12 13:09:09,362 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 39861
    [junit] 2011-02-12 13:09:09,362 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 39861: exiting
    [junit] 2011-02-12 13:09:09,362 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-12 13:09:09,362 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 39861
    [junit] 2011-02-12 13:09:09,362 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 39861: exiting
    [junit] 2011-02-12 13:09:09,363 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 39861: exiting
    [junit] 2011-02-12 13:09:09,363 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 39861: exiting
    [junit] 2011-02-12 13:09:09,363 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 39861: exiting
    [junit] 2011-02-12 13:09:09,362 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 39861: exiting
    [junit] 2011-02-12 13:09:09,362 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 39861: exiting
    [junit] 2011-02-12 13:09:09,363 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 39861: exiting
    [junit] 2011-02-12 13:09:09,363 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 39861: exiting
    [junit] 2011-02-12 13:09:09,363 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 39861: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.694 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 97 minutes 2 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testErrorReplicas

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-trunk - Build # 579 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/579/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 667231 lines...]
    [junit] 2011-02-11 12:47:50,951 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-11 12:47:50,952 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-11 12:47:51,061 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 33356
    [junit] 2011-02-11 12:47:51,062 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 33356: exiting
    [junit] 2011-02-11 12:47:51,062 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-02-11 12:47:51,062 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:32838, storageID=DS-1763883780-127.0.1.1-32838-1297428460026, infoPort=54976, ipcPort=33356):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2011-02-11 12:47:51,062 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-11 12:47:51,062 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33356
    [junit] 2011-02-11 12:47:51,065 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-11 12:47:51,165 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-11 12:47:51,166 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:32838, storageID=DS-1763883780-127.0.1.1-32838-1297428460026, infoPort=54976, ipcPort=33356):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-11 12:47:51,166 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 33356
    [junit] 2011-02-11 12:47:51,166 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-11 12:47:51,167 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-11 12:47:51,167 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-11 12:47:51,169 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-11 12:47:51,271 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-11 12:47:51,271 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 3 
    [junit] 2011-02-11 12:47:51,271 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-11 12:47:51,272 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 57762
    [junit] 2011-02-11 12:47:51,273 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 57762: exiting
    [junit] 2011-02-11 12:47:51,273 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 57762: exiting
    [junit] 2011-02-11 12:47:51,273 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 57762: exiting
    [junit] 2011-02-11 12:47:51,273 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 57762: exiting
    [junit] 2011-02-11 12:47:51,273 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 57762: exiting
    [junit] 2011-02-11 12:47:51,273 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 57762: exiting
    [junit] 2011-02-11 12:47:51,273 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 57762: exiting
    [junit] 2011-02-11 12:47:51,274 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 57762: exiting
    [junit] 2011-02-11 12:47:51,274 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 57762: exiting
    [junit] 2011-02-11 12:47:51,274 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 57762: exiting
    [junit] 2011-02-11 12:47:51,274 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 57762
    [junit] 2011-02-11 12:47:51,274 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 37.153 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 73 minutes 12 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
	at sun.nio.ch.IOUtil.initPipe(Native Method)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at java.nio.channels.Selector.open(Selector.java:209)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:318)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1501)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:338)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:298)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:422)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:283)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:265)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1576)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)




Hadoop-Hdfs-trunk - Build # 578 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/578/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 662998 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:390)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-02-10 12:47:13,695 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-10 12:47:13,709 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-10 12:47:13,795 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:51219, storageID=DS-1434452765-127.0.1.1-51219-1297342022685, infoPort=48801, ipcPort=37337):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-10 12:47:13,795 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 37337
    [junit] 2011-02-10 12:47:13,796 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-10 12:47:13,796 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-10 12:47:13,796 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-10 12:47:13,797 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-10 12:47:13,898 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-10 12:47:13,899 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 4 4 
    [junit] 2011-02-10 12:47:13,898 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-10 12:47:13,900 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 37835
    [junit] 2011-02-10 12:47:13,900 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 37835: exiting
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 37835: exiting
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 37835
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 37835: exiting
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 37835: exiting
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 37835: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.823 sec
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 37835: exiting
    [junit] 2011-02-10 12:47:13,902 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 37835: exiting
    [junit] 2011-02-10 12:47:13,902 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 37835: exiting
    [junit] 2011-02-10 12:47:13,902 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 37835: exiting
    [junit] 2011-02-10 12:47:13,901 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 37835: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 69 minutes 17 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite

Error Message:
java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)

Stack Trace:
java.lang.RuntimeException: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1546)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1411)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1357)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:600)
	at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:804)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:313)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:302)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.__CLR3_0_2u5mf5tro5(TestFileConcurrentReader.java:275)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite(TestFileConcurrentReader.java:274)
Caused by: java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/classes/hdfs-default.xml (Too many open files)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:106)
	at java.io.FileInputStream.<init>(FileInputStream.java:66)
	at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
	at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
	at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:653)
	at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:186)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
	at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107)
	at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:225)
	at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1460)
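
"Too many open files" on a file as mundane as hdfs-default.xml means the descriptor table was exhausted before Configuration ever tried to parse it; the leak is elsewhere in the run, since every unclosed stream or socket counts against the process ulimit. A generic close-in-finally sketch of the discipline that keeps a long test run under the limit (plain JDK I/O in era-appropriate pre-Java-7 style, nothing Hadoop-specific assumed):

    import java.io.ByteArrayOutputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public final class FdHygiene {
      // Each FileInputStream pins one descriptor until close(); leak a
      // few hundred of these and every subsequent open() in the JVM --
      // including Configuration's parse of hdfs-default.xml -- fails.
      static byte[] readAll(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
          ByteArrayOutputStream out = new ByteArrayOutputStream();
          byte[] buf = new byte[8192];
          int n;
          while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
          }
          return out.toByteArray();
        } finally {
          in.close();  // runs even when read() throws
        }
      }
    }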


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Error while running command to get file permissions : java.io.IOException: Cannot run program "/bin/ls": java.io.IOException: error=24, Too many open files
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
	at org.apache.hadoop.util.Shell.run(Shell.java:188)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
	at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:571)
	at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:50)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:492)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:467)
	at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:131)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:148)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1593)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1573)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:232)
	at junit.framework.TestSuite.run(TestSuite.java:227)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
Caused by: java.io.IOException: java.io.IOException: error=24, Too many open files
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
	... 34 more

Stack Trace:
java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "/bin/ls": java.io.IOException: error=24, Too many open files
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
	at org.apache.hadoop.util.Shell.run(Shell.java:188)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:381)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:467)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:450)
	at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:571)
	at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:50)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:492)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:467)
	at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:131)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:148)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1593)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1573)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
Caused by: java.io.IOException: java.io.IOException: error=24, Too many open files
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)

	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:517)
	at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:467)
	at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:131)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:148)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1593)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1573)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
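
Here the same exhaustion surfaces at fork time: error=24 (EMFILE) from /bin/ls means the descriptor table was full before the child process could even be created, so the permission check never ran. On Linux a suite can watch its own descriptor count through /proc; a hedged diagnostic sketch, assuming a /proc filesystem (returns -1 elsewhere):

    import java.io.File;

    public final class OpenFdProbe {
      // /proc/self/fd holds one entry per open descriptor of the calling
      // process (Linux only). The listing itself briefly opens one more.
      static int openFds() {
        File[] fds = new File("/proc/self/fd").listFiles();
        return fds == null ? -1 : fds.length;
      }

      public static void main(String[] args) {
        System.out.println("open descriptors: " + openFds());
      }
    }

Logging this number at the start of each test case shows which case is leaking, rather than which case happens to hit the ceiling.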


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)




Hadoop-Hdfs-trunk - Build # 577 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/577/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 703816 lines...]
    [junit] 2011-02-09 12:38:56,136 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-09 12:38:56,136 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-09 12:38:56,137 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-09 12:38:56,249 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 46799
    [junit] 2011-02-09 12:38:56,249 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 46799: exiting
    [junit] 2011-02-09 12:38:56,250 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 46799
    [junit] 2011-02-09 12:38:56,250 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-09 12:38:56,250 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:59361, storageID=DS-300555630-127.0.1.1-59361-1297255125241, infoPort=58635, ipcPort=46799):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2011-02-09 12:38:56,251 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-09 12:38:56,265 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-09 12:38:56,351 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:59361, storageID=DS-300555630-127.0.1.1-59361-1297255125241, infoPort=58635, ipcPort=46799):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-09 12:38:56,352 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 46799
    [junit] 2011-02-09 12:38:56,352 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-09 12:38:56,352 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-09 12:38:56,352 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-09 12:38:56,353 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-09 12:38:56,455 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-09 12:38:56,455 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-09 12:38:56,456 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 11 3 
    [junit] 2011-02-09 12:38:56,457 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 60968
    [junit] 2011-02-09 12:38:56,458 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 60968: exiting
    [junit] 2011-02-09 12:38:56,458 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 60968: exiting
    [junit] 2011-02-09 12:38:56,458 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 60968: exiting
    [junit] 2011-02-09 12:38:56,458 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 60968: exiting
    [junit] 2011-02-09 12:38:56,459 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 60968: exiting
    [junit] 2011-02-09 12:38:56,458 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-09 12:38:56,459 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 60968: exiting
    [junit] 2011-02-09 12:38:56,459 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 60968: exiting
    [junit] 2011-02-09 12:38:56,459 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 60968: exiting
    [junit] 2011-02-09 12:38:56,459 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 60968: exiting
    [junit] 2011-02-09 12:38:56,458 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 60968
    [junit] 2011-02-09 12:38:56,459 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 60968: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 37.138 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 64 minutes 1 second
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.testListCorruptFileBlocksInSafeMode

Error Message:
Namenode is not in safe mode

Stack Trace:
junit.framework.AssertionFailedError: Namenode is not in safe mode
	at org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.__CLR3_0_2mvj3yzpiu(TestListCorruptFileBlocks.java:241)
	at org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks.testListCorruptFileBlocksInSafeMode(TestListCorruptFileBlocks.java:132)
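
This assertion races NameNode startup: the cluster can leave its automatic startup safe mode (once enough block reports arrive) before the test's check runs, so "Namenode is not in safe mode" shows up intermittently. Tests that need a deterministic safe-mode state usually enter it explicitly; a sketch, assuming the FSConstants.SafeModeAction API of this code base:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.MiniDFSCluster;
    import org.apache.hadoop.hdfs.protocol.FSConstants;

    public class SafeModeFixture {
      public static void main(String[] args) throws Exception {
        MiniDFSCluster cluster =
            new MiniDFSCluster.Builder(new Configuration()).build();
        try {
          DistributedFileSystem dfs =
              (DistributedFileSystem) cluster.getFileSystem();
          // Pin safe mode on so later assertions cannot race the
          // automatic exit that follows the datanodes' block reports.
          dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_ENTER);
          // ... exercise listCorruptFileBlocks et al. here ...
          dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
        } finally {
          cluster.shutdown();
        }
      }
    }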




Hadoop-Hdfs-trunk - Build # 576 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/576/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 664793 lines...]
    [junit] 2011-02-08 12:51:19,626 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-08 12:51:19,627 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-08 12:51:19,737 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 37251
    [junit] 2011-02-08 12:51:19,737 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 37251: exiting
    [junit] 2011-02-08 12:51:19,738 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-02-08 12:51:19,738 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:55893, storageID=DS-1346001687-127.0.1.1-55893-1297169468694, infoPort=59523, ipcPort=37251):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2011-02-08 12:51:19,738 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 37251
    [junit] 2011-02-08 12:51:19,738 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-08 12:51:19,738 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-08 12:51:19,750 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-08 12:51:19,850 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:55893, storageID=DS-1346001687-127.0.1.1-55893-1297169468694, infoPort=59523, ipcPort=37251):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-08 12:51:19,851 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 37251
    [junit] 2011-02-08 12:51:19,851 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-08 12:51:19,851 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-08 12:51:19,851 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-08 12:51:19,852 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-08 12:51:19,854 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-08 12:51:19,854 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 6 2 
    [junit] 2011-02-08 12:51:19,854 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-08 12:51:19,855 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 38969
    [junit] 2011-02-08 12:51:19,856 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 38969: exiting
    [junit] 2011-02-08 12:51:19,856 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 38969: exiting
    [junit] 2011-02-08 12:51:19,856 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 38969: exiting
    [junit] 2011-02-08 12:51:19,856 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 38969: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.68 sec
    [junit] 2011-02-08 12:51:19,859 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 38969: exiting
    [junit] 2011-02-08 12:51:19,860 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 38969: exiting
    [junit] 2011-02-08 12:51:19,860 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 38969: exiting
    [junit] 2011-02-08 12:51:19,860 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 38969: exiting
    [junit] 2011-02-08 12:51:19,860 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 38969: exiting
    [junit] 2011-02-08 12:51:19,860 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 38969: exiting
    [junit] 2011-02-08 12:51:19,861 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-08 12:51:19,864 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38969

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 76 minutes 44 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
	at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
	at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:68)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:52)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at java.nio.channels.Selector.open(Selector.java:209)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:318)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1501)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:338)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:298)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:422)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:513)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:283)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:265)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1576)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1519)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1486)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
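
In this build the exhaustion hits even earlier, at Selector.open() while the DataNode's IPC server builds its listener, so no socket work in the JVM can proceed. A Sun/Oracle JDK on Unix exposes the process's descriptor budget through a com.sun.management extension, which a harness can log alongside each failure; a sketch under that JDK assumption:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public final class FdBudget {
      public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // The Unix subtype is a JDK extension, so probe rather than cast.
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
          com.sun.management.UnixOperatingSystemMXBean unix =
              (com.sun.management.UnixOperatingSystemMXBean) os;
          System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
              + " of max " + unix.getMaxFileDescriptorCount());
        } else {
          System.out.println("descriptor counts unavailable on this JVM");
        }
      }
    }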


REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1

Error Message:
127.0.0.1:44994is not an underUtilized node

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:44994is not an underUtilized node
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1012)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:954)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1497)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnevenDistribution(TestBalancer.java:185)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR3_0_2cs3hxsso5(TestBalancer.java:335)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1(TestBalancer.java:332)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.formatOccurred(FSImage.java:1165)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:558)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:577)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1417)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:211)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:470)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerDefaultConstructor(TestBalancer.java:353)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR3_0_2g13gq9so9(TestBalancer.java:344)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2(TestBalancer.java:341)


FAILED:  org.apache.hadoop.hdfs.TestLargeBlock.testLargeBlockSize

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
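
The TestLargeBlock timeout above is reported by the Ant JUnit watchdog, which kills the whole forked VM when its allotment expires; that is why the report's elapsed time is explicitly untrustworthy. JUnit 4 can bound the method itself instead, failing fast with a trace that names the slow test; an illustrative sketch (the bound shown is a placeholder, not taken from this build):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class BoundedTestExample {
      // JUnit 4 enforces this per method and fails the single test with
      // a timeout exception, instead of letting the Ant-level watchdog
      // kill the fork with no per-test attribution.
      @Test(timeout = 900000)  // milliseconds; illustrative value
      public void finishesWithinBound() throws Exception {
        assertTrue(doLargeWrite());
      }

      // Hypothetical stand-in for the real large-block write logic.
      private boolean doLargeWrite() {
        return true;
      }
    }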




Hadoop-Hdfs-trunk - Build # 575 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/575/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 694602 lines...]
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$SleepAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:116)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$7$b9c2bffe(BlockReceiverAspects.aj:193)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:445)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:633)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:389)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:331)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] Caused by: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil.sleep(FiTestUtil.java:80)
    [junit] 	... 11 more
    [junit] 2011-02-07 13:05:09,027 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-07 13:05:09,114 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-07 13:05:09,128 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:39865, storageID=DS-1815951553-127.0.1.1-39865-1297083898075, infoPort=33846, ipcPort=59191):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-07 13:05:09,128 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 59191
    [junit] 2011-02-07 13:05:09,128 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-07 13:05:09,129 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-07 13:05:09,129 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-07 13:05:09,129 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-07 13:05:09,232 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-07 13:05:09,232 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-07 13:05:09,232 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-02-07 13:05:09,234 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 56703
    [junit] 2011-02-07 13:05:09,234 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 56703: exiting
    [junit] 2011-02-07 13:05:09,234 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 56703
    [junit] 2011-02-07 13:05:09,236 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 56703: exiting
    [junit] 2011-02-07 13:05:09,235 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 56703: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.667 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 90 minutes 36 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestLargeBlock.testLargeBlockSize

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore

Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of dce7cc80c6da033edb1cb296c49a316e but expecting 0741d4a446340e1d20403fc4565760d7

Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of dce7cc80c6da033edb1cb296c49a316e but expecting 0741d4a446340e1d20403fc4565760d7
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:670)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:710)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:603)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:480)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4ubl(TestStorageRestore.java:316)
	at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
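
The guard that fires here recomputes an MD5 over the on-disk fsimage at load time and compares it with the digest recorded when the image was saved, so any byte-level difference, whatever its cause, is reported as corruption. The recomputation itself is plain java.security; a self-contained sketch of the same style of check (path and expected digest are placeholders passed on the command line):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.MessageDigest;

    public final class ImageMd5Check {
      static String md5Hex(String path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        InputStream in = new FileInputStream(path);
        try {
          byte[] buf = new byte[8192];
          int n;
          while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n);  // digest the file incrementally
          }
        } finally {
          in.close();
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
          hex.append(String.format("%02x", b));
        }
        return hex.toString();
      }

      public static void main(String[] args) throws Exception {
        String actual = md5Hex(args[0]);
        if (!actual.equals(args[1])) {
          throw new IOException("Image file " + args[0]
              + " is corrupt with MD5 checksum of " + actual
              + " but expecting " + args[1]);
        }
      }
    }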




Hadoop-Hdfs-trunk - Build # 574 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/574/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 684897 lines...]
    [junit] 2011-02-06 12:35:47,390 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-06 12:35:47,390 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-02-06 12:35:47,492 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 53480
    [junit] 2011-02-06 12:35:47,492 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 53480: exiting
    [junit] 2011-02-06 12:35:47,493 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 53480
    [junit] 2011-02-06 12:35:47,493 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-02-06 12:35:47,493 WARN  datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:36039, storageID=DS-1455553998-127.0.1.1-36039-1296995736550, infoPort=42382, ipcPort=53480):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2011-02-06 12:35:47,493 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-06 12:35:47,495 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-06 12:35:47,592 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-02-06 12:35:47,596 INFO  datanode.DataNode (DataNode.java:run(1460)) - DatanodeRegistration(127.0.0.1:36039, storageID=DS-1455553998-127.0.1.1-36039-1296995736550, infoPort=42382, ipcPort=53480):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-02-06 12:35:47,596 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 53480
    [junit] 2011-02-06 12:35:47,596 INFO  datanode.DataNode (DataNode.java:shutdown(786)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-02-06 12:35:47,597 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-02-06 12:35:47,597 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-02-06 12:35:47,597 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-02-06 12:35:47,700 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2847)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-06 12:35:47,700 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-02-06 12:35:47,700 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(559)) - Number of transactions: 6 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 5 3 
    [junit] 2011-02-06 12:35:47,702 INFO  ipc.Server (Server.java:stop(1610)) - Stopping server on 36868
    [junit] 2011-02-06 12:35:47,702 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 36868: exiting
    [junit] 2011-02-06 12:35:47,702 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 36868
    [junit] 2011-02-06 12:35:47,703 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 5 on 36868: exiting
    [junit] 2011-02-06 12:35:47,703 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 6 on 36868: exiting
    [junit] 2011-02-06 12:35:47,703 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 4 on 36868: exiting
    [junit] 2011-02-06 12:35:47,703 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 9 on 36868: exiting
    [junit] 2011-02-06 12:35:47,702 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 7 on 36868: exiting
    [junit] 2011-02-06 12:35:47,704 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 36868: exiting
    [junit] 2011-02-06 12:35:47,702 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-02-06 12:35:47,704 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 36868: exiting
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 36.334 sec
    [junit] 2011-02-06 12:35:47,703 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 8 on 36868: exiting
    [junit] 2011-02-06 12:35:47,702 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 36868: exiting

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:746: Tests failed!

Total time: 61 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09

Error Message:
Wrong number of PendingReplication blocks expected:<2> but was:<1>

Stack Trace:
junit.framework.AssertionFailedError: Wrong number of PendingReplication blocks expected:<2> but was:<1>
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.__CLR3_0_2fte182r7m(TestBlockReport.java:457)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockReport.blockReport_09(TestBlockReport.java:429)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore

Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 32306d67a4acbc6ccb3bb49e790cfd9c but expecting e2889bb9ee0079b53e1d6ad739ad3545

Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 32306d67a4acbc6ccb3bb49e790cfd9c but expecting e2889bb9ee0079b53e1d6ad739ad3545
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:670)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:710)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:603)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:480)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4ubl(TestStorageRestore.java:316)
	at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)