Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2010/12/16 21:04:05 UTC
Hadoop-Hdfs-trunk-Commit - Build # 492 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/492/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139404 lines...]
[junit] 2010-12-16 20:03:43,855 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-16 20:03:43,856 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43008
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 43008
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 20:03:43,857 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-16 20:03:43,857 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:58718, storageID=DS-586157729-127.0.1.1-58718-1292529823018, infoPort=51424, ipcPort=43008):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:58718, storageID=DS-586157729-127.0.1.1-58718-1292529823018, infoPort=51424, ipcPort=43008):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-16 20:03:43,861 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43008
[junit] 2010-12-16 20:03:43,861 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 20:03:43,861 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-16 20:03:43,861 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-16 20:03:43,861 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-16 20:03:43,963 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 20:03:43,963 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 20:03:43,963 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 8
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 51660
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 51660
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 51660: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.697 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 23543fd946cc0d08fa477d0015383abe but expecting 9849b11c80ca88a516a83ff9e1216a39
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 23543fd946cc0d08fa477d0015383abe but expecting 9849b11c80ca88a516a83ff9e1216a39
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
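The failure above is the fsimage integrity check in FSImage.loadFSImage: the secondary NameNode recomputes the MD5 of the image it just read and compares it to the digest recorded at checkpoint time, and any mismatch is reported as corruption. As a rough illustration of that kind of check (this is not the Hadoop code itself; the class name and the hard-coded bytes are hypothetical):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Check {
    // Compute the MD5 digest of a byte array as a lowercase hex string.
    static String md5Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is a mandatory JDK algorithm", e);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] image = "hypothetical fsimage bytes".getBytes(StandardCharsets.UTF_8);
        String actual = md5Hex(image);
        // In HDFS the expected digest is the one saved when the image was written;
        // here we just reuse the computed value so the check passes.
        String expected = actual;
        if (!actual.equals(expected)) {
            throw new IOException("Image file is corrupt with MD5 checksum of "
                + actual + " but expecting " + expected);
        }
        System.out.println("checksum ok: " + actual);
    }
}
```

A quick sanity check for the helper is a known digest, e.g. md5Hex of "abc" is 900150983cd24fb0d6963f7d28e17f72.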
Hadoop-Hdfs-trunk-Commit - Build # 507 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/507/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1369 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] [INFO] Uploading project information for hadoop-hdfs 0.23.0-20110106.180913-43
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
Total time: 69 minutes 24 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
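The "Return code is: 502" above is a transient gateway error from the snapshot repository, not a code or test problem; the upload itself succeeded and only the follow-up metadata transfer failed. A deploy step could shield itself from this class of failure with a small retry wrapper; the following is only a sketch of the idea (withRetries and its backoff are illustrative, not part of the real build):

```java
import java.util.concurrent.Callable;

public class RetryDeploy {
    // Run a task up to maxAttempts times, sleeping a little longer after
    // each failure, and rethrow the last exception if all attempts fail.
    static <T> T withRetries(Callable<T> task, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(100L * attempt); // simple linear backoff
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulate an upload that returns 502 twice, then succeeds.
        String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("Return code is: 502");
            }
            return "deployed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In practice one would retry only on errors known to be transient (5xx, connection resets) rather than on every exception.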
Hadoop-Hdfs-trunk-Commit - Build # 506 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/506/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1367 lines...]
-compile-test-system.wrapper:
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
Total time: 31 minutes 40 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 505 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/505/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 137035 lines...]
[junit] 2010-12-27 23:13:48,122 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,123 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 23:13:48,225 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 56721
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 23:13:48,227 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,228 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 23:13:48,228 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,330 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,330 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 8
[junit] 2010-12-27 23:13:48,330 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 54561: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.245 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4y(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 504 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/504/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144338 lines...]
[junit] 2010-12-27 22:24:13,132 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 22:24:13,245 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,246 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47521: exiting
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 22:24:13,249 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 22:24:13,250 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 22:24:13,362 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,362 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 7
[junit] 2010-12-27 22:24:13,362 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 53495: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.882 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
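The AsynchronousCloseException entries in the console logs above are benign shutdown noise, not part of the failure: DataXceiverServer blocks in ServerSocketChannel.accept(), and when MiniDFSCluster shutdown closes that channel from another thread, the blocked accept() is terminated with exactly this exception by design. A minimal standalone demonstration of that behavior (hypothetical class name, not Hadoop code):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class CloseDuringAccept {
    // Block one thread in accept(), close the channel from another thread,
    // and return the simple name of the exception the blocked thread saw.
    static String closeDuringAccept() throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        final Throwable[] seen = new Throwable[1];
        Thread acceptor = new Thread(() -> {
            try {
                server.accept(); // blocks, like DataXceiverServer.run
            } catch (Throwable t) {
                seen[0] = t;
            }
        });
        acceptor.start();
        Thread.sleep(200);   // give the acceptor time to block in accept()
        server.close();      // closing the channel wakes the blocked accept
        acceptor.join();
        return seen[0].getClass().getSimpleName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(closeDuringAccept());
    }
}
```

This is why the DataNode logs the exception at WARN and shutdown proceeds normally afterward.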
Hadoop-Hdfs-trunk-Commit - Build # 503 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/503/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139187 lines...]
[junit] 2010-12-26 05:36:20,712 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-26 05:36:20,814 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 49021: exiting
[junit] 2010-12-26 05:36:20,816 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,816 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-26 05:36:20,816 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-26 05:36:20,818 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-26 05:36:20,819 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-26 05:36:20,820 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-26 05:36:20,922 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,922 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 13 3
[junit] 2010-12-26 05:36:20,922 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,923 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44058
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44058
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,926 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 44058: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.034 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 49 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 502 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/502/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete file /homes/hudson/.ivy2/cache/org.apache.hadoop/avro/jars/.nfs00000000054240250000002b
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 501 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/501/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete directory /homes/hudson/.ivy2/cache/org.apache.hadoop
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 500 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/500/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 148064 lines...]
[junit] 2010-12-22 04:48:21,922 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-22 04:48:21,922 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-22 04:48:22,025 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37929
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:41067, storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, ipcPort=37929):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 37929
[junit] 2010-12-22 04:48:22,026 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 37929: exiting
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:41067, storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, ipcPort=37929):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-22 04:48:22,027 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37929
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-22 04:48:22,028 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-22 04:48:22,028 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-22 04:48:22,028 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-22 04:48:22,130 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,130 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 2Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 6 7
[junit] 2010-12-22 04:48:22,130 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,131 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 40184
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 40184
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 40184: exiting
[junit] 2010-12-22 04:48:22,133 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 40184: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.862 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of bd3f6219eddb98077bc18e6c12861710 but expecting 88d060e5de3c40f6ddcf4149957f9ec2
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of bd3f6219eddb98077bc18e6c12861710 but expecting 88d060e5de3c40f6ddcf4149957f9ec2
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 499 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/499/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1473 lines...]
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java:33: package InterfaceStability does not exist
[javac] @InterfaceStability.Evolving
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:147: cannot find symbol
[javac] symbol : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] HdfsLocatedFileStatus f, Path parent) {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:146: cannot find symbol
[javac] symbol : class LocatedFileStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] private LocatedFileStatus makeQualifiedLocated(
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:159: cannot find symbol
[javac] symbol : class FsStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] public FsStatus getFsStatus() throws IOException {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:164: cannot find symbol
[javac] symbol : class FsServerDefaults
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] public FsServerDefaults getServerDefaults() throws IOException {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:170: cannot find symbol
[javac] symbol : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] final Path p)
[javac] ^
[javac] Note: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 100 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:335: Compile failed; see the compiler error output for details.
Total time: 13 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 498 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/498/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 145407 lines...]
[junit] 2010-12-21 21:02:49,548 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47329
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47329
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 21:02:49,659 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:45847, storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, ipcPort=47329):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47329: exiting
[junit] 2010-12-21 21:02:49,661 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 21:02:49,662 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 21:02:49,662 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:45847, storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, ipcPort=47329):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 21:02:49,662 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47329
[junit] 2010-12-21 21:02:49,663 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 21:02:49,663 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 21:02:49,663 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 21:02:49,663 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 21:02:49,765 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 5
[junit] 2010-12-21 21:02:49,766 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33168
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33168
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 33168: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.926 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 8 minutes 50 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 3b4f6a8cfcd648671359d20cf5fede04 but expecting 42a5be8dc7187ce3f5af8891edc38f6e
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 3b4f6a8cfcd648671359d20cf5fede04 but expecting 42a5be8dc7187ce3f5af8891edc38f6e
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 497 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/497/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139268 lines...]
[junit] 2010-12-21 19:25:21,468 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:25:21,468 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:25:21,569 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38446
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38446
[junit] 2010-12-21 19:25:21,571 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:25:21,571 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:36691, storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, ipcPort=38446):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 19:25:21,571 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:36691, storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, ipcPort=38446):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:25:21,572 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38446
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:25:21,572 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 19:25:21,573 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 19:25:21,573 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:25:21,674 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,674 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,675 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 11 3
[junit] 2010-12-21 19:25:21,676 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 46353
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 46353
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 46353: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.667 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 18 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
    at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
    at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
    at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
    at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 62d7ef8ef25cbf3973a3306d4f41adf9 but expecting 2fdedf97f4e6044ee113c2dd30ecb287
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 62d7ef8ef25cbf3973a3306d4f41adf9 but expecting 2fdedf97f4e6044ee113c2dd30ecb287
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 496 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/496/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 143613 lines...]
[junit] 2010-12-21 19:03:38,419 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:03:38,419 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:03:38,520 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38397
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38397
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:34688, storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, ipcPort=38397):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 19:03:38,521 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38397: exiting
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:34688, storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, ipcPort=38397):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:03:38,522 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38397
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:03:38,523 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 19:03:38,523 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 19:03:38,523 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:03:38,625 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 10 6
[junit] 2010-12-21 19:03:38,626 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 42152
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 42152
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 42152: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.661 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 10 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of b56342fecc1aec17456b9ee1c243d2cd but expecting 2a921b0b2ea27a3eb547553c8a134cb7
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of b56342fecc1aec17456b9ee1c243d2cd but expecting 2a921b0b2ea27a3eb547553c8a134cb7
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 495 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/495/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 140929 lines...]
[junit] 2010-12-21 00:45:53,108 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 41550
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 41550: exiting
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 41550
[junit] 2010-12-21 00:45:53,209 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 41550: exiting
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 41550: exiting
[junit] 2010-12-21 00:45:53,209 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:43800, storageID=DS-968232197-127.0.1.1-43800-1292892352218, infoPort=46970, ipcPort=41550):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 00:45:53,212 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:45:53,212 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 00:45:53,213 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:43800, storageID=DS-968232197-127.0.1.1-43800-1292892352218, infoPort=46970, ipcPort=41550):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 00:45:53,213 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 41550
[junit] 2010-12-21 00:45:53,213 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:45:53,213 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 00:45:53,213 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 00:45:53,213 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:45:53,315 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:45:53,315 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:45:53,315 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 5
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48472
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 48472: exiting
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 48472
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 48472: exiting
[junit] 2010-12-21 00:45:53,319 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 48472: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.837 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 12 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of f483a6a05343f3a5a905194d83379a3a but expecting 47b87cf59325e8317530bcfc0a6ffd58
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of f483a6a05343f3a5a905194d83379a3a but expecting 47b87cf59325e8317530bcfc0a6ffd58
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 494 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/494/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 137288 lines...]
[junit] 2010-12-21 00:33:20,495 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:33:20,495 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 00:33:20,610 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38250
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38250: exiting
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38250: exiting
[junit] 2010-12-21 00:33:20,611 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:52644, storageID=DS-596423927-127.0.1.1-52644-1292891599452, infoPort=41489, ipcPort=38250):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38250: exiting
[junit] 2010-12-21 00:33:20,612 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:33:20,612 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38250
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:52644, storageID=DS-596423927-127.0.1.1-52644-1292891599452, infoPort=41489, ipcPort=38250):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 00:33:20,613 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38250
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:33:20,614 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 00:33:20,614 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 00:33:20,614 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:33:20,716 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:33:20,716 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:33:20,716 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 6 7
[junit] 2010-12-21 00:33:20,717 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47473
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 47473: exiting
[junit] 2010-12-21 00:33:20,719 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 47473: exiting
[junit] 2010-12-21 00:33:20,719 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47473
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.053 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 10 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9e4db606ea68a788476061e245b48fb7 but expecting 495b22a3bd50d4a318a6950277ae3411
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9e4db606ea68a788476061e245b48fb7 but expecting 495b22a3bd50d4a318a6950277ae3411
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
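For context, the "Image file ... is corrupt" error above comes from a plain MD5 comparison: at load time, FSImage.loadFSImage digests the on-disk fsimage and compares it against the checksum recorded when the image was saved. The sketch below is illustrative only (the real check lives in Java in FSImage.java, and the function names here are hypothetical); it reproduces the shape of that verification and of the error message seen in the log.

```python
# Illustrative sketch only -- NOT Hadoop code. Mimics the kind of MD5
# verification FSImage.loadFSImage performs on the checkpoint image.
import hashlib


def md5_of_file(path, chunk_size=8192):
    """Compute the hex MD5 digest of a file, reading in chunks so
    large fsimage files are not pulled into memory at once."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_image(path, expected_md5):
    """Raise IOError with a message shaped like the one in the build
    log when the computed checksum does not match the recorded one."""
    actual = md5_of_file(path)
    if actual != expected_md5:
        raise IOError(
            "Image file %s is corrupt with MD5 checksum of %s "
            "but expecting %s" % (path, actual, expected_md5))
```

Note that the computed/expected digest pairs differ between builds #492 and #493, which is consistent with the checkpoint image changing between save and load on each run (e.g. a race during the secondary namenode checkpoint) rather than with a single fixed bad image on disk; that is a reading of the log, not a confirmed diagnosis.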
Hadoop-Hdfs-trunk-Commit - Build # 493 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/493/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 140756 lines...]
[junit] 2010-12-20 15:03:59,265 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-20 15:03:59,266 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-20 15:03:59,367 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52203
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 52203: exiting
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 52203
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-20 15:03:59,368 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:33187, storageID=DS-1110103111-127.0.1.1-33187-1292857438383, infoPort=58795, ipcPort=52203):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-20 15:03:59,368 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 52203: exiting
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 52203: exiting
[junit] 2010-12-20 15:03:59,369 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-20 15:03:59,370 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:33187, storageID=DS-1110103111-127.0.1.1-33187-1292857438383, infoPort=58795, ipcPort=52203):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-20 15:03:59,370 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52203
[junit] 2010-12-20 15:03:59,370 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-20 15:03:59,370 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-20 15:03:59,370 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-20 15:03:59,371 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-20 15:03:59,472 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-20 15:03:59,472 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-20 15:03:59,473 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 6
[junit] 2010-12-20 15:03:59,474 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 55985
[junit] 2010-12-20 15:03:59,474 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 55985
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 55985: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.937 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 20 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 710b09658f742cca03ff90d01ec82cc8 but expecting 332c0560c0bb8cc85333ebada5808ba0
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 710b09658f742cca03ff90d01ec82cc8 but expecting 332c0560c0bb8cc85333ebada5808ba0
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)