Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2010/12/23 06:22:26 UTC
Hadoop-Hdfs-trunk-Commit - Build # 501 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/501/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete directory /homes/hudson/.ivy2/cache/org.apache.hadoop
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
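A note on build 501 above: the clean-cache target died because Ant's <delete> could not remove the shared Ivy cache at /homes/hudson/.ivy2/cache/org.apache.hadoop, which on an NFS-mounted home usually means some process still holds a file open somewhere under that tree. The sketch below shows roughly what the task attempts, with per-entry error reporting added so the blocking file is named instead of the whole build failing opaquely; the path comes from the log, but the walk-and-report logic is an illustration, not Hadoop's build code.

    import java.io.IOException;
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;

    public class CacheCleaner {
        public static void main(String[] args) throws IOException {
            // Same directory the failing clean-cache target tries to delete.
            Path cache = Paths.get("/homes/hudson/.ivy2/cache/org.apache.hadoop");
            if (!Files.exists(cache)) return;
            Files.walkFileTree(cache, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path f, BasicFileAttributes a) {
                    try {
                        Files.delete(f);
                    } catch (IOException e) {
                        // A still-open file (e.g. an NFS ".nfs..." placeholder)
                        // is reported by name instead of aborting the walk.
                        System.err.println("cannot delete " + f + ": " + e);
                    }
                    return FileVisitResult.CONTINUE;
                }
                @Override
                public FileVisitResult postVisitDirectory(Path d, IOException e) {
                    try {
                        Files.delete(d); // fails only if an entry above survived
                    } catch (IOException ex) {
                        System.err.println("cannot delete " + d + ": " + ex);
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        }
    }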
Hadoop-Hdfs-trunk-Commit - Build # 507 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/507/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1369 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] [INFO] Uploading project information for hadoop-hdfs 0.23.0-20110106.180913-43
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
Total time: 69 minutes 24 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
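Builds 507 (above) and 506 (below) fail at the same step for the same reason: repository.apache.org answered 502 (Bad Gateway) mid-deploy, once on the POM and once on the jar, aborting artifact:deploy after roughly a megabyte had already gone up. A 502 from the repository front end is typically transient, and the Maven Ant tasks of this vintage do not appear to retry on their own, so a wrapper along these lines is one way to ride out the blip. This is a minimal sketch under stated assumptions: the Callable passed in is a hypothetical stand-in for whatever performs the real upload, and the retry counts and backoff are illustrative.

    import java.util.concurrent.Callable;

    public class RetryDeploy {
        // Retry an action a few times with a growing pause between attempts,
        // rethrowing the last failure if none succeed.
        static <T> T withRetry(Callable<T> action, int attempts, long backoffMs)
                throws Exception {
            Exception last = null;
            for (int i = 1; i <= attempts; i++) {
                try {
                    return action.call();
                } catch (Exception e) {
                    last = e; // e.g. the "Return code is: 502" transfer failure
                    System.err.println("deploy attempt " + i + " failed: " + e.getMessage());
                    if (i < attempts) {
                        Thread.sleep(backoffMs * i); // 30s, then 60s, ...
                    }
                }
            }
            throw last;
        }

        public static void main(String[] args) throws Exception {
            // Stand-in call; a 502 would surface here as an exception.
            withRetry(() -> { return null; }, 3, 30_000L);
        }
    }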
Hadoop-Hdfs-trunk-Commit - Build # 506 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/506/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1367 lines...]
-compile-test-system.wrapper:
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
Total time: 31 minutes 40 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 505 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/505/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 137035 lines...]
[junit] 2010-12-27 23:13:48,122 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,123 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 23:13:48,225 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 56721
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 23:13:48,227 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,228 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 23:13:48,228 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,330 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,330 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 8
[junit] 2010-12-27 23:13:48,330 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 54561: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.245 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4y(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
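The TestStorageRestore failure above (builds 504 and 503 below are the same failure with different digests) trips the MD5 guard in FSImage.loadFSImage: the image is digested as it streams in, and the result must match the checksum recorded when the checkpoint was saved, so any divergence between what the secondary wrote and what it reads back surfaces as this "is corrupt" IOException rather than as a later, stranger parse error. Below is a minimal sketch of that style of check using the same java.security.MessageDigest mechanism; the command-line plumbing is illustrative, not the FSImage source, and the expected digest is treated as an input rather than read from wherever the saver recorded it.

    import java.io.IOException;
    import java.io.InputStream;
    import java.math.BigInteger;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class ImageChecksum {
        // Digest a file while streaming it, as an image loader would,
        // and render the result as 32 lowercase hex characters.
        static String md5Hex(String path) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            try (InputStream in = Files.newInputStream(Paths.get(path))) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    md.update(buf, 0, n);
                }
            }
            return String.format("%032x", new BigInteger(1, md.digest()));
        }

        public static void main(String[] args) throws Exception {
            String computed = md5Hex(args[0]); // e.g. .../secondary/current/fsimage
            String expected = args[1];         // digest recorded at save time
            if (!computed.equals(expected)) {
                throw new IOException("Image file " + args[0]
                    + " is corrupt with MD5 checksum of " + computed
                    + " but expecting " + expected);
            }
        }
    }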
Hadoop-Hdfs-trunk-Commit - Build # 504 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/504/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144338 lines...]
[junit] 2010-12-27 22:24:13,132 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 22:24:13,245 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,246 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47521: exiting
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 22:24:13,249 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 22:24:13,250 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 22:24:13,362 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,362 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 7
[junit] 2010-12-27 22:24:13,362 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 53495: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.882 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 503 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/503/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139187 lines...]
[junit] 2010-12-26 05:36:20,712 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-26 05:36:20,814 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 49021: exiting
[junit] 2010-12-26 05:36:20,816 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,816 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-26 05:36:20,816 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-26 05:36:20,818 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-26 05:36:20,819 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-26 05:36:20,820 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-26 05:36:20,922 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,922 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 13 3
[junit] 2010-12-26 05:36:20,922 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,923 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44058
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44058
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,926 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 44058: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.034 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 49 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 502 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/502/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete file /homes/hudson/.ivy2/cache/org.apache.hadoop/avro/jars/.nfs00000000054240250000002b
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
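Build 502's error names the likely culprit behind build 501 as well: a .nfs00000000054240250000002b placeholder inside the Avro jar directory of the Ivy cache. On NFS, unlinking a file that some process still has open "silly-renames" it to a .nfsXXXX entry instead of removing it, and neither that entry nor its parent directories can be deleted until the last holder closes the file, which is exactly how a clean-cache step comes to fail on a shared Hudson home. Below is a small sketch for locating such leftovers so the holding process can be tracked down; the directory is the one from the log, and the scan itself is an illustration.

    import java.io.IOException;
    import java.nio.file.*;

    public class FindNfsLeftovers {
        public static void main(String[] args) throws IOException {
            // Directory named in build 502's "Unable to delete file" error.
            Path dir = Paths.get("/homes/hudson/.ivy2/cache/org.apache.hadoop/avro/jars");
            // Glob for NFS silly-rename placeholders left by deleted-but-open files.
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, ".nfs*")) {
                for (Path p : ds) {
                    System.out.println(p); // each hit is still held open by someone
                }
            }
        }
    }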