Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2010/11/25 00:05:44 UTC
Hadoop-Hdfs-trunk-Commit - Build # 470 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/470/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4086 lines...]
[junit] 2010-11-24 23:05:18,111 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-24 23:05:18,111 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-24 23:05:18,112 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-24 23:05:18,112 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-24 23:05:18,112 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-24 23:05:18,116 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-24 23:05:18,117 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-24 23:05:18,117 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-24 23:05:18,123 INFO common.Storage (FSImage.java:format(1639)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-24 23:05:18,124 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-24 23:05:18,124 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-24 23:05:18,125 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-24 23:05:18,125 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-24 23:05:18,125 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-24 23:05:18,130 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-24 23:05:18,130 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-24 23:05:18,131 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-24 23:05:18,131 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-24 23:05:18,132 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-24 23:05:18,132 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-24 23:05:18,134 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-24 23:05:18,134 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-24 23:05:18,135 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-24 23:05:18,136 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-24 23:05:18,137 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-24 23:05:18,137 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-24 23:05:18,137 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-24 23:05:18,138 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-24 23:05:18,138 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-24 23:05:18,139 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-24 23:05:18,140 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvuz(363)) - Running __CLR3_0_2q30srsvuz
[junit] 2010-11-24 23:05:18,141 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-24 23:05:18,141 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-24 23:05:18,142 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-24 23:05:18,149 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 11068806, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.517 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuokc(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1hoku(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbold(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513olw(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpom9(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxoml(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
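[Editorial note: all six failures above share one stack trace: each test path goes through testSyncReplicas into DataNode.syncBlock, which throws a NullPointerException at DataNode.java:1883. That shape is typical of a method dereferencing a collaborator's return value without a null check, so a mocked collaborator left unstubbed (and therefore returning null) blows up inside the method under test. A self-contained sketch of that pattern, with hypothetical names and no Mockito or HDFS dependency:]

```java
// Hypothetical sketch of the failure pattern behind the six NPEs above.
// None of these types are the real HDFS classes.
import java.util.List;

public class SyncBlockSketch {

    /** Stand-in for the interface the recovery code calls during syncBlock. */
    interface RecoveryCoordinator {
        List<String> getRecoveryTargets(String blockId);
    }

    /** Unguarded dereference: throws NPE when the coordinator returns null. */
    static int syncBlock(RecoveryCoordinator coordinator, String blockId) {
        List<String> targets = coordinator.getRecoveryTargets(blockId);
        return targets.size(); // NPE here if targets == null
    }

    /** Returns true iff an unstubbed (null-returning) mock triggers the NPE. */
    static boolean npeFromUnstubbedMock() {
        RecoveryCoordinator unstubbedMock = blockId -> null; // "mock" with no stubbing
        try {
            syncBlock(unstubbedMock, "blk_1");
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("NPE from unstubbed mock: " + npeFromUnstubbedMock());
    }
}
```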
Hadoop-Hdfs-trunk-Commit - Build # 507 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/507/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1369 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] [INFO] Uploading project information for hadoop-hdfs 0.23.0-20110106.180913-43
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
Total time: 69 minutes 24 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
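[Editorial note: build 507 (and build 506 below) died in the same place: the artifact:deploy step received an HTTP 502 from repository.apache.org, a transient gateway error rather than a test failure. The usual mitigation is a bounded retry of the upload; a hedged sketch follows, where deployOnce is a placeholder and not the real Ant/Maven deploy task:]

```java
import java.util.concurrent.Callable;

public class DeployRetrySketch {

    /**
     * Run deployOnce up to maxAttempts times, treating a false result or an
     * exception (e.g. an HTTP 502 from the snapshot repository) as retryable.
     * Returns the 1-based attempt that succeeded, or -1 if all attempts fail.
     */
    static int deployWithRetry(Callable<Boolean> deployOnce, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (deployOnce.call()) {
                    return attempt;
                }
            } catch (Exception transientFailure) {
                // swallow and retry; a real script would also back off and log
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Simulated repository that answers 502 twice, then accepts the upload.
        final int[] calls = {0};
        Callable<Boolean> flakyUpload = () -> ++calls[0] >= 3;
        System.out.println("succeeded on attempt "
                + deployWithRetry(flakyUpload, 5));
    }
}
```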
Hadoop-Hdfs-trunk-Commit - Build # 506 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/506/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1367 lines...]
-compile-test-system.wrapper:
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
Total time: 31 minutes 40 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 505 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/505/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 137035 lines...]
[junit] 2010-12-27 23:13:48,122 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,123 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 23:13:48,225 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 56721
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 23:13:48,227 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,228 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 23:13:48,228 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,330 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,330 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 8
[junit] 2010-12-27 23:13:48,330 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 54561: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.245 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4y(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
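[Editorial note: the TestStorageRestore failure above is an fsimage integrity check: the loader recomputes the MD5 of the image bytes and refuses to load when it differs from the digest recorded at save time. A stdlib-only sketch of that style of check; the method names are illustrative, not the FSImage API:]

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ImageChecksumSketch {

    /** Hex-encoded MD5 of the given bytes. */
    static String md5Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(data);
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("MD5 is a mandatory JDK algorithm", e);
        }
    }

    /** Mirror of the check that failed: reject the image if digests differ. */
    static void verifyImage(byte[] imageBytes, String expectedMd5) throws IOException {
        String actual = md5Hex(imageBytes);
        if (!actual.equals(expectedMd5)) {
            throw new IOException("Image file is corrupt with MD5 checksum of "
                    + actual + " but expecting " + expectedMd5);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] image = "fsimage-bytes".getBytes(StandardCharsets.UTF_8);
        verifyImage(image, md5Hex(image)); // passes: digests match
        System.out.println("checksum ok");
    }
}
```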
Hadoop-Hdfs-trunk-Commit - Build # 504 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/504/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144338 lines...]
[junit] 2010-12-27 22:24:13,132 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 22:24:13,245 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,246 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47521: exiting
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 22:24:13,249 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 22:24:13,250 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 22:24:13,362 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,362 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 7
[junit] 2010-12-27 22:24:13,362 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 53495: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.882 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
        at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
        at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
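[Editor's note: the recurring TestStorageRestore failure above is an fsimage MD5 mismatch raised while the secondary NameNode merges a checkpoint. The kind of check that produces this IOException can be sketched as below; the function names are illustrative, not the actual FSImage API.]

```python
import hashlib

def md5_of_file(path: str) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path: str, expected_md5: str) -> None:
    """Raise IOError when the on-disk digest differs from the one
    recorded when the image was saved -- the failure mode seen above."""
    actual = md5_of_file(path)
    if actual != expected_md5:
        raise IOError(f"Image file {path} is corrupt with MD5 checksum of "
                      f"{actual} but expecting {expected_md5}")
```

A mismatch like the one in the report means the image bytes on disk no longer match the digest captured at save time, e.g. because the file was rewritten mid-checkpoint.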
Hadoop-Hdfs-trunk-Commit - Build # 503 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/503/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139187 lines...]
[junit] 2010-12-26 05:36:20,712 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-26 05:36:20,814 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 49021: exiting
[junit] 2010-12-26 05:36:20,816 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,816 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-26 05:36:20,816 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-26 05:36:20,818 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-26 05:36:20,819 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-26 05:36:20,820 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-26 05:36:20,922 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,922 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 13 3
[junit] 2010-12-26 05:36:20,922 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,923 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44058
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44058
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,926 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 44058: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.034 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 49 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
        at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
        at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 502 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/502/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete file /homes/hudson/.ivy2/cache/org.apache.hadoop/avro/jars/.nfs00000000054240250000002b
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
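[Editor's note: the clean-cache failure in this build ("Unable to delete file .../.nfs00000000054240250000002b") is the classic NFS "silly rename": NFS replaces a deleted-but-still-open file with a `.nfsXXXX...` placeholder, and the directory cannot be removed until the holding process releases it. A hypothetical helper for a cleanup step that wants to detect such remnants and retry rather than fail the build:]

```python
from pathlib import Path

def nfs_remnants(root: str) -> list:
    """Return paths of NFS silly-rename placeholders under root.

    These ".nfs*" files appear when a file is deleted on an NFS mount
    while some process still holds it open; deleting the containing
    directory fails until the handle is released.
    """
    return [str(p) for p in Path(root).rglob(".nfs*") if p.is_file()]
```

A clean target could poll this list and wait (or skip the directory) instead of aborting with "Unable to delete file", as happened here.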
Hadoop-Hdfs-trunk-Commit - Build # 501 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/501/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete directory /homes/hudson/.ivy2/cache/org.apache.hadoop
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 500 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/500/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 148064 lines...]
[junit] 2010-12-22 04:48:21,922 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-22 04:48:21,922 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-22 04:48:22,025 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37929
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:41067, storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, ipcPort=37929):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 37929
[junit] 2010-12-22 04:48:22,026 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 37929: exiting
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:41067, storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, ipcPort=37929):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-22 04:48:22,027 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37929
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-22 04:48:22,028 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-22 04:48:22,028 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-22 04:48:22,028 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-22 04:48:22,130 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,130 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 2Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 6 7
[junit] 2010-12-22 04:48:22,130 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,131 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 40184
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 40184
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 40184: exiting
[junit] 2010-12-22 04:48:22,133 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 40184: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.862 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
        at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
        at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
        at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
        at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of bd3f6219eddb98077bc18e6c12861710 but expecting 88d060e5de3c40f6ddcf4149957f9ec2
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of bd3f6219eddb98077bc18e6c12861710 but expecting 88d060e5de3c40f6ddcf4149957f9ec2
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
        at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
        at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 499 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/499/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1473 lines...]
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java:33: package InterfaceStability does not exist
[javac] @InterfaceStability.Evolving
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:147: cannot find symbol
[javac] symbol : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] HdfsLocatedFileStatus f, Path parent) {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:146: cannot find symbol
[javac] symbol : class LocatedFileStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] private LocatedFileStatus makeQualifiedLocated(
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:159: cannot find symbol
[javac] symbol : class FsStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] public FsStatus getFsStatus() throws IOException {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:164: cannot find symbol
[javac] symbol : class FsServerDefaults
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] public FsServerDefaults getServerDefaults() throws IOException {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:170: cannot find symbol
[javac] symbol : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] final Path p)
[javac] ^
[javac] Note: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 100 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:335: Compile failed; see the compiler error output for details.
Total time: 13 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
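[Editor's note: the 100 compile errors in this build (missing org.apache.hadoop.fs and InterfaceStability types) are the typical symptom of the hadoop-common dependency failing to resolve, plausibly fallout from the Ivy cache deletions in the two preceding builds. A hedged sketch of a pre-compile guard that checks whether a jar actually landed in the Ivy cache; the layout mirrors the cache paths visible in the console above, and the function name is illustrative:]

```python
from pathlib import Path

def ivy_jar_resolved(cache_dir: str, org: str, module: str) -> bool:
    """True when at least one jar for org/module exists in the Ivy cache.

    Assumes the layout seen in the console output:
    <cache>/<organisation>/<module>/jars/<module>-<version>.jar
    """
    jars = Path(cache_dir) / org / module / "jars"
    return jars.is_dir() and any(jars.glob("*.jar"))
```

Running such a check before the `javac` step would turn "100 errors: cannot find symbol" into a single clear "dependency did not resolve" failure.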
Hadoop-Hdfs-trunk-Commit - Build # 498 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/498/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 145407 lines...]
[junit] 2010-12-21 21:02:49,548 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47329
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47329
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 21:02:49,659 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:45847, storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, ipcPort=47329):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47329: exiting
[junit] 2010-12-21 21:02:49,661 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 21:02:49,662 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 21:02:49,662 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:45847, storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, ipcPort=47329):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 21:02:49,662 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47329
[junit] 2010-12-21 21:02:49,663 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 21:02:49,663 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 21:02:49,663 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 21:02:49,663 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 21:02:49,765 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 5
[junit] 2010-12-21 21:02:49,766 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33168
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33168
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 33168: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.926 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 8 minutes 50 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 3b4f6a8cfcd648671359d20cf5fede04 but expecting 42a5be8dc7187ce3f5af8891edc38f6e
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 3b4f6a8cfcd648671359d20cf5fede04 but expecting 42a5be8dc7187ce3f5af8891edc38f6e
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
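For context, the recurring TestStorageRestore failure above is an MD5 mismatch raised while loading the secondary NameNode's fsimage: the digest computed over the file differs from the one recorded at checkpoint time. A minimal standalone sketch of that kind of integrity check follows; the class and method names here are illustrative only, not the actual Hadoop FSImage API.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Illustrative sketch (not Hadoop code): hash an image file and compare
// against the digest recorded when the image was saved.
public class ImageChecksum {
    // Returns the lowercase hex MD5 digest of the file at 'path'.
    static String md5Hex(Path path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(path)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    // Mirrors the failure mode in the logs: on mismatch, the loader
    // reports the file as corrupt and aborts the checkpoint merge.
    static void verify(Path image, String expectedHex) throws Exception {
        String actual = md5Hex(image);
        if (!actual.equals(expectedHex)) {
            throw new java.io.IOException("Image file " + image
                + " is corrupt with MD5 checksum of " + actual
                + " but expecting " + expectedHex);
        }
    }
}
```

Under this model, the flaky failures above would mean the image bytes on disk no longer match the digest taken at save time, whether from a real corruption or from a race between the checkpoint writer and the verifier.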
Hadoop-Hdfs-trunk-Commit - Build # 497 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/497/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139268 lines...]
[junit] 2010-12-21 19:25:21,468 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:25:21,468 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:25:21,569 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38446
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38446
[junit] 2010-12-21 19:25:21,571 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:25:21,571 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:36691, storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, ipcPort=38446):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 19:25:21,571 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:36691, storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, ipcPort=38446):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:25:21,572 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38446
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:25:21,572 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 19:25:21,573 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 19:25:21,573 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:25:21,674 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,674 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,675 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 11 3
[junit] 2010-12-21 19:25:21,676 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 46353
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 46353
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 46353: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.667 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 18 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 62d7ef8ef25cbf3973a3306d4f41adf9 but expecting 2fdedf97f4e6044ee113c2dd30ecb287
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 62d7ef8ef25cbf3973a3306d4f41adf9 but expecting 2fdedf97f4e6044ee113c2dd30ecb287
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 496 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/496/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 143613 lines...]
[junit] 2010-12-21 19:03:38,419 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:03:38,419 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:03:38,520 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38397
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38397
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:34688, storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, ipcPort=38397):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 19:03:38,521 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38397: exiting
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:34688, storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, ipcPort=38397):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:03:38,522 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38397
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:03:38,523 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 19:03:38,523 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 19:03:38,523 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:03:38,625 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 10 6
[junit] 2010-12-21 19:03:38,626 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 42152
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 42152
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 42152: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.661 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 10 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of b56342fecc1aec17456b9ee1c243d2cd but expecting 2a921b0b2ea27a3eb547553c8a134cb7
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of b56342fecc1aec17456b9ee1c243d2cd but expecting 2a921b0b2ea27a3eb547553c8a134cb7
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 495 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/495/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 140929 lines...]
[junit] 2010-12-21 00:45:53,108 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 41550
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 41550: exiting
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 41550
[junit] 2010-12-21 00:45:53,209 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 41550: exiting
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 41550: exiting
[junit] 2010-12-21 00:45:53,209 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:43800, storageID=DS-968232197-127.0.1.1-43800-1292892352218, infoPort=46970, ipcPort=41550):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 00:45:53,212 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:45:53,212 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 00:45:53,213 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:43800, storageID=DS-968232197-127.0.1.1-43800-1292892352218, infoPort=46970, ipcPort=41550):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 00:45:53,213 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 41550
[junit] 2010-12-21 00:45:53,213 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:45:53,213 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 00:45:53,213 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 00:45:53,213 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:45:53,315 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:45:53,315 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:45:53,315 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 5
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48472
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 48472: exiting
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 48472
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 48472: exiting
[junit] 2010-12-21 00:45:53,319 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 48472: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.837 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 12 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of f483a6a05343f3a5a905194d83379a3a but expecting 47b87cf59325e8317530bcfc0a6ffd58
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of f483a6a05343f3a5a905194d83379a3a but expecting 47b87cf59325e8317530bcfc0a6ffd58
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 494 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/494/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 137288 lines...]
[junit] 2010-12-21 00:33:20,495 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:33:20,495 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 00:33:20,610 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38250
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38250: exiting
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38250: exiting
[junit] 2010-12-21 00:33:20,611 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:52644, storageID=DS-596423927-127.0.1.1-52644-1292891599452, infoPort=41489, ipcPort=38250):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38250: exiting
[junit] 2010-12-21 00:33:20,612 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:33:20,612 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38250
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:52644, storageID=DS-596423927-127.0.1.1-52644-1292891599452, infoPort=41489, ipcPort=38250):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 00:33:20,613 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38250
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:33:20,614 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 00:33:20,614 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 00:33:20,614 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:33:20,716 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:33:20,716 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:33:20,716 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 6 7
[junit] 2010-12-21 00:33:20,717 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47473
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 47473: exiting
[junit] 2010-12-21 00:33:20,719 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 47473: exiting
[junit] 2010-12-21 00:33:20,719 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47473
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.053 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 10 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9e4db606ea68a788476061e245b48fb7 but expecting 495b22a3bd50d4a318a6950277ae3411
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9e4db606ea68a788476061e245b48fb7 but expecting 495b22a3bd50d4a318a6950277ae3411
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
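All four failing builds report the same symptom: when the SecondaryNameNode merges the checkpoint, `FSImage.loadFSImage` computes an MD5 digest of the transferred `fsimage` file that differs from the digest it expected, and raises an IOException. As a rough illustration of that comparison (a minimal Python sketch, not Hadoop's actual Java implementation; the function names, path, and digests here are illustrative assumptions, not Hadoop APIs):

```python
import hashlib

def md5_of_file(path, chunk_size=8192):
    """Compute the MD5 hex digest of a file, reading in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_image(path, expected_md5):
    """Raise IOError with a message shaped like the log above if the
    file's computed MD5 does not match the expected digest."""
    actual = md5_of_file(path)
    if actual != expected_md5:
        raise IOError(
            "Image file %s is corrupt with MD5 checksum of %s "
            "but expecting %s" % (path, actual, expected_md5))
```

Note that the computed and expected digests differ on every run above, which suggests the image bytes (or the advertised digest) change between writer and reader rather than a single corrupted artifact.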

Hadoop-Hdfs-trunk-Commit - Build # 493 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/493/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 140756 lines...]
[junit] 2010-12-20 15:03:59,265 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-20 15:03:59,266 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-20 15:03:59,367 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52203
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 52203: exiting
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 52203
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-20 15:03:59,368 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:33187, storageID=DS-1110103111-127.0.1.1-33187-1292857438383, infoPort=58795, ipcPort=52203):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-20 15:03:59,368 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 52203: exiting
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 52203: exiting
[junit] 2010-12-20 15:03:59,369 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-20 15:03:59,370 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:33187, storageID=DS-1110103111-127.0.1.1-33187-1292857438383, infoPort=58795, ipcPort=52203):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-20 15:03:59,370 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52203
[junit] 2010-12-20 15:03:59,370 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-20 15:03:59,370 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-20 15:03:59,370 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-20 15:03:59,371 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-20 15:03:59,472 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-20 15:03:59,472 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-20 15:03:59,473 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 6
[junit] 2010-12-20 15:03:59,474 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 55985
[junit] 2010-12-20 15:03:59,474 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 55985
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 55985: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.937 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 20 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 710b09658f742cca03ff90d01ec82cc8 but expecting 332c0560c0bb8cc85333ebada5808ba0
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 710b09658f742cca03ff90d01ec82cc8 but expecting 332c0560c0bb8cc85333ebada5808ba0
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 492 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/492/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139404 lines...]
[junit] 2010-12-16 20:03:43,855 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-16 20:03:43,856 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43008
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 43008
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 20:03:43,857 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-16 20:03:43,857 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:58718, storageID=DS-586157729-127.0.1.1-58718-1292529823018, infoPort=51424, ipcPort=43008):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:58718, storageID=DS-586157729-127.0.1.1-58718-1292529823018, infoPort=51424, ipcPort=43008):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-16 20:03:43,861 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43008
[junit] 2010-12-16 20:03:43,861 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 20:03:43,861 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-16 20:03:43,861 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-16 20:03:43,861 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-16 20:03:43,963 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 20:03:43,963 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 20:03:43,963 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 8
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 51660
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 51660
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 51660: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.697 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 23543fd946cc0d08fa477d0015383abe but expecting 9849b11c80ca88a516a83ff9e1216a39
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 23543fd946cc0d08fa477d0015383abe but expecting 9849b11c80ca88a516a83ff9e1216a39
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 491 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/491/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 133376 lines...]
[junit] 2010-12-16 19:43:24,559 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-16 19:43:24,559 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-16 19:43:24,561 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33691
[junit] 2010-12-16 19:43:24,561 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 33691: exiting
[junit] 2010-12-16 19:43:24,562 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 33691: exiting
[junit] 2010-12-16 19:43:24,562 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 33691: exiting
[junit] 2010-12-16 19:43:24,562 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33691
[junit] 2010-12-16 19:43:24,563 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 19:43:24,563 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:46255, storageID=DS-234606803-127.0.1.1-46255-1292528603674, infoPort=52737, ipcPort=33691):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-16 19:43:24,563 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 19:43:24,564 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-16 19:43:24,564 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:46255, storageID=DS-234606803-127.0.1.1-46255-1292528603674, infoPort=52737, ipcPort=33691):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-16 19:43:24,564 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33691
[junit] 2010-12-16 19:43:24,564 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 19:43:24,565 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-16 19:43:24,565 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-16 19:43:24,565 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-16 19:43:24,675 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 19:43:24,675 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 19:43:24,675 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 4
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37453
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 37453
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 37453: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.865 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 85d140eb152f07b333c271179251970d but expecting b6b6a7d89be0bc1cf946106cc78eacfe
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 85d140eb152f07b333c271179251970d but expecting b6b6a7d89be0bc1cf946106cc78eacfe
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 490 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/490/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 136703 lines...]
[junit] 2010-12-14 22:05:40,449 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 22:05:40,450 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54146
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54146: exiting
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54146
[junit] 2010-12-14 22:05:40,551 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:39160, storageID=DS-1617480765-127.0.1.1-39160-1292364339594, infoPort=45645, ipcPort=54146):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-14 22:05:40,551 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54146: exiting
[junit] 2010-12-14 22:05:40,552 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-14 22:05:40,552 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54146: exiting
[junit] 2010-12-14 22:05:40,553 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:39160, storageID=DS-1617480765-127.0.1.1-39160-1292364339594, infoPort=45645, ipcPort=54146):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-14 22:05:40,553 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54146
[junit] 2010-12-14 22:05:40,553 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 22:05:40,553 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-14 22:05:40,553 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-14 22:05:40,553 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 22:05:40,655 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 22:05:40,655 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 22:05:40,655 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 11 5
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54602
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 54602: exiting
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 54602: exiting
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54602
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 54602: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.818 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 1 second
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 993793371432de51679ded3aeccab03d but expecting d89f442914d49bd27045e22c447a5ffa
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 993793371432de51679ded3aeccab03d but expecting d89f442914d49bd27045e22c447a5ffa
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
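The recurring failure above is the SecondaryNameNode's checkpoint code rejecting an fsimage whose bytes do not hash to the MD5 value it expected, and raising an IOException of exactly the form shown in the stack trace. As a rough illustration only (a simplified sketch, not the actual FSImage.loadFSImage code; the class and method names below are hypothetical), the check amounts to hashing the image bytes and comparing hex digests:

```java
import java.io.IOException;
import java.security.MessageDigest;

public class Md5Check {
    // Hex-encode a digest so it matches the lowercase checksum strings
    // that appear in the error messages above.
    static String toHex(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Compare the MD5 of the image bytes against the expected value and
    // fail the same way the log does when they disagree.
    static void verify(byte[] imageBytes, String expectedMd5)
            throws IOException, java.security.NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        String actual = toHex(md.digest(imageBytes));
        if (!actual.equals(expectedMd5)) {
            throw new IOException("Image file is corrupt with MD5 checksum of "
                + actual + " but expecting " + expectedMd5);
        }
    }

    public static void main(String[] args) throws Exception {
        // MD5("abc") is a well-known test vector.
        verify("abc".getBytes("UTF-8"),
               "900150983cd24fb0d6963f7d28e17f72");
        System.out.println("checksum ok");
    }
}
```

In the failing test, both digests are freshly computed over the same checkpoint, so a mismatch points at the image bytes changing between when the expected digest was recorded and when the file was re-read, rather than at on-disk corruption of a long-lived file.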
Hadoop-Hdfs-trunk-Commit - Build # 489 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/489/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 143003 lines...]
[junit] 2010-12-14 21:53:27,287 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-14 21:53:27,388 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48952
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 48952
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 48952: exiting
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 48952: exiting
[junit] 2010-12-14 21:53:27,389 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-14 21:53:27,390 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 48952: exiting
[junit] 2010-12-14 21:53:27,390 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:41759, storageID=DS-1989182890-127.0.1.1-41759-1292363606407, infoPort=45129, ipcPort=48952):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-14 21:53:27,392 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 21:53:27,392 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-14 21:53:27,393 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:41759, storageID=DS-1989182890-127.0.1.1-41759-1292363606407, infoPort=45129, ipcPort=48952):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-14 21:53:27,393 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48952
[junit] 2010-12-14 21:53:27,393 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 21:53:27,393 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-14 21:53:27,393 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-14 21:53:27,393 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 21:53:27,495 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 21:53:27,495 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 21:53:27,495 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 7
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 45548
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 45548
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 45548: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.954 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 29 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 930091b46f62ba27b7bc0981530ac4d3 but expecting 4ed14e598349d48a2f4800088babbc6e
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 930091b46f62ba27b7bc0981530ac4d3 but expecting 4ed14e598349d48a2f4800088babbc6e
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4l(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 488 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/488/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 146286 lines...]
[junit] 2010-12-14 18:03:05,707 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 18:03:05,707 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44181
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44181: exiting
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44181: exiting
[junit] 2010-12-14 18:03:05,809 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:45160, storageID=DS-1161769845-127.0.1.1-45160-1292349784850, infoPort=45198, ipcPort=44181):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44181
[junit] 2010-12-14 18:03:05,809 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44181: exiting
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 18:03:05,810 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-14 18:03:05,811 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:45160, storageID=DS-1161769845-127.0.1.1-45160-1292349784850, infoPort=45198, ipcPort=44181):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-14 18:03:05,811 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44181
[junit] 2010-12-14 18:03:05,811 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 18:03:05,811 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-14 18:03:05,811 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-14 18:03:05,812 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 18:03:05,913 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 18:03:05,913 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 18:03:05,914 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 11 5
[junit] 2010-12-14 18:03:05,915 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 39821
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 39821
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 39821: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.814 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 5 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d80140c1c42e305c7e922044d12cc8c3 but expecting f0bf403db9f3d6c1a9f694599b49f015
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d80140c1c42e305c7e922044d12cc8c3 but expecting f0bf403db9f3d6c1a9f694599b49f015
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r3q(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 487 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/487/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144861 lines...]
[junit] 2010-12-10 07:25:41,214 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-10 07:25:41,214 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33651
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 33651: exiting
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33651
[junit] 2010-12-10 07:25:41,316 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:43287, storageID=DS-1896691708-127.0.1.1-43287-1291965940384, infoPort=50373, ipcPort=33651):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 33651: exiting
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 33651: exiting
[junit] 2010-12-10 07:25:41,317 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-10 07:25:41,317 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-10 07:25:41,317 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:43287, storageID=DS-1896691708-127.0.1.1-43287-1291965940384, infoPort=50373, ipcPort=33651):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-10 07:25:41,318 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33651
[junit] 2010-12-10 07:25:41,318 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-10 07:25:41,318 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-10 07:25:41,318 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-10 07:25:41,318 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-10 07:25:41,420 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-10 07:25:41,420 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 7
[junit] 2010-12-10 07:25:41,420 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 60200
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 60200
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 60200: exiting
[junit] 2010-12-10 07:25:41,423 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 60200: exiting
[junit] 2010-12-10 07:25:41,423 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 60200: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.762 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d16064892c28373e3f7f112c07e982dc but expecting 779cad55065df8db44670bf8766cf5f9
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d16064892c28373e3f7f112c07e982dc but expecting 779cad55065df8db44670bf8766cf5f9
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r3u(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 486 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/486/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 145123 lines...]
[junit] 2010-12-09 23:53:14,616 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-09 23:53:14,717 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48456
[junit] 2010-12-09 23:53:14,717 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 48456: exiting
[junit] 2010-12-09 23:53:14,717 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 48456: exiting
[junit] 2010-12-09 23:53:14,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 48456: exiting
[junit] 2010-12-09 23:53:14,718 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 48456
[junit] 2010-12-09 23:53:14,719 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 23:53:14,719 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-09 23:53:14,719 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:40062, storageID=DS-17273153-127.0.1.1-40062-1291938793750, infoPort=44013, ipcPort=48456):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-09 23:53:14,721 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 23:53:14,721 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-09 23:53:14,722 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:40062, storageID=DS-17273153-127.0.1.1-40062-1291938793750, infoPort=44013, ipcPort=48456):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-09 23:53:14,722 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48456
[junit] 2010-12-09 23:53:14,722 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 23:53:14,722 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-09 23:53:14,722 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-09 23:53:14,723 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-09 23:53:14,824 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 23:53:14,824 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 23:53:14,825 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 3
[junit] 2010-12-09 23:53:14,826 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 57180
[junit] 2010-12-09 23:53:14,826 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 57180
[junit] 2010-12-09 23:53:14,826 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 57180: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.842 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 6d9de0703efa444f45d324aee7f5f7ba but expecting 89541f158bf0bb5aca2c5f3657263d95
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 6d9de0703efa444f45d324aee7f5f7ba but expecting 89541f158bf0bb5aca2c5f3657263d95
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r3w(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 485 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/485/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144785 lines...]
[junit] 2010-12-09 19:43:21,826 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43071
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 43071: exiting
[junit] 2010-12-09 19:43:21,927 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 43071: exiting
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 43071
[junit] 2010-12-09 19:43:21,928 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 43071: exiting
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 19:43:21,928 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:39312, storageID=DS-1841765039-127.0.1.1-39312-1291923800916, infoPort=45410, ipcPort=43071):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-09 19:43:21,930 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 19:43:21,930 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-09 19:43:21,931 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:39312, storageID=DS-1841765039-127.0.1.1-39312-1291923800916, infoPort=45410, ipcPort=43071):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-09 19:43:21,931 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43071
[junit] 2010-12-09 19:43:21,931 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 19:43:21,931 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-09 19:43:21,931 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-09 19:43:21,931 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-09 19:43:21,933 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 19:43:21,933 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 5
[junit] 2010-12-09 19:43:21,933 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 34916
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 34916
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 34916: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.572 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 8 minutes 56 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 33107cba943a561d1044566e0043c67e but expecting f77fbcfef771fee5aaf6ee76d1257847
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 33107cba943a561d1044566e0043c67e but expecting f77fbcfef771fee5aaf6ee76d1257847
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1p(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 484 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/484/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139707 lines...]
[junit] 2010-12-08 07:30:15,926 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-08 07:30:15,926 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-08 07:30:16,028 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52270
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 52270: exiting
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 52270: exiting
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 52270: exiting
[junit] 2010-12-08 07:30:16,029 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:37921, storageID=DS-866126237-127.0.1.1-37921-1291793415028, infoPort=42726, ipcPort=52270):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-08 07:30:16,029 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 07:30:16,030 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 52270
[junit] 2010-12-08 07:30:16,030 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:37921, storageID=DS-866126237-127.0.1.1-37921-1291793415028, infoPort=42726, ipcPort=52270):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-08 07:30:16,030 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52270
[junit] 2010-12-08 07:30:16,031 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-08 07:30:16,031 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-08 07:30:16,031 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-08 07:30:16,031 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-08 07:30:16,133 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 07:30:16,133 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 07:30:16,134 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 3Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 8
[junit] 2010-12-08 07:30:16,135 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 45830
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 45830: exiting
[junit] 2010-12-08 07:30:16,137 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 45830: exiting
[junit] 2010-12-08 07:30:16,137 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 45830
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 45830: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 14.975 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 46 minutes 57 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 38110de9f9d25d861cbc6cc4ff8c872c but expecting 251f3ab2efd7f1fd3feeb9b656d244b3
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 38110de9f9d25d861cbc6cc4ff8c872c but expecting 251f3ab2efd7f1fd3feeb9b656d244b3
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1p(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 483 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/483/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 147582 lines...]
[junit] 2010-12-08 06:38:21,796 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 36829
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 36829: exiting
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 36829: exiting
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 36829: exiting
[junit] 2010-12-08 06:38:21,913 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 36829
[junit] 2010-12-08 06:38:21,913 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 06:38:21,913 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-08 06:38:21,913 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:46398, storageID=DS-233984618-127.0.1.1-46398-1291790300916, infoPort=58049, ipcPort=36829):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-08 06:38:21,915 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-08 06:38:21,916 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-08 06:38:21,916 INFO datanode.DataNode (DataNode.java:run(1442)) - DatanodeRegistration(127.0.0.1:46398, storageID=DS-233984618-127.0.1.1-46398-1291790300916, infoPort=58049, ipcPort=36829):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-08 06:38:21,916 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 36829
[junit] 2010-12-08 06:38:21,917 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-08 06:38:21,917 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-08 06:38:21,917 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-08 06:38:21,918 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-08 06:38:22,020 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 06:38:22,020 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 3Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 7
[junit] 2010-12-08 06:38:22,020 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 06:38:22,021 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 36341
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 36341: exiting
[junit] 2010-12-08 06:38:22,023 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 06:38:22,023 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 36341
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 36341: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 101.505 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 43 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 1d07b190fa86f83dec14b6f09f4be0b0 but expecting c465b6d13b27ceb1695ec8fda2737627
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 1d07b190fa86f83dec14b6f09f4be0b0 but expecting c465b6d13b27ceb1695ec8fda2737627
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1n(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 482 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/482/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1038 lines...]
[ivy:resolve] .............................................................................................................................................................................................................................. (331kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] [SUCCESSFUL ] org.apache.hadoop#avro;1.3.2!avro.jar (1502ms)
[ivy:resolve]
[ivy:resolve] :: problems summary ::
[ivy:resolve] :::: WARNINGS
[ivy:resolve] module not found: org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT
[ivy:resolve] ==== apache-snapshot: tried
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar:
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.jar
[ivy:resolve] ==== maven2: tried
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar:
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.jar
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: UNRESOLVED DEPENDENCIES ::
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT: not found
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :::: ERRORS
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/maven-metadata.xml
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.pom
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.jar
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2.pom
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2.jar
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2-sources.jar
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2-javadoc.jar
[ivy:resolve]
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1716: impossible to resolve dependencies:
resolve failed - see output for details
Total time: 10 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
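The build above never reached the tests: every "SERVER ERROR: Bad Gateway" line is repository.apache.org answering with HTTP 502, so the "UNRESOLVED DEPENDENCIES: not found" is a symptom of a transient repository outage, not a genuinely missing hadoop-common snapshot. A small stdlib-only sketch of that triage, which scans ivy console output and separates 5xx server errors from true missing-artifact failures (the helper name and heuristics are ours, not part of Ivy):

```python
import re

# "SERVER ERROR: <reason> url=<url>" lines indicate the repository itself
# failed (retry the build); ":: <org#module;rev>: not found" with no server
# errors suggests the artifact really is absent from the resolver chain.
SERVER_ERROR = re.compile(r"SERVER ERROR: (.+?) url=(\S+)")
NOT_FOUND = re.compile(r":: ([\w.#;-]+): not found")

def classify_resolve_failure(console_text):
    server_errors = SERVER_ERROR.findall(console_text)
    missing = NOT_FOUND.findall(console_text)
    if server_errors:
        return "transient", server_errors
    if missing:
        return "missing-artifact", missing
    return "unknown", []

sample = """\
[ivy:resolve] :: org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT: not found
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/maven-metadata.xml
"""
kind, details = classify_resolve_failure(sample)
print(kind)  # the 502s explain the "not found", so this build is a retry candidate
```

Applied to the console excerpt above it would report "transient", matching the fact that later builds got past dependency resolution once the snapshot repository recovered.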

Hadoop-Hdfs-trunk-Commit - Build # 481 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/481/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 152903 lines...]
[junit] 2010-12-07 08:44:36,723 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54229
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54229: exiting
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54229: exiting
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54229: exiting
[junit] 2010-12-07 08:44:36,826 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54229
[junit] 2010-12-07 08:44:36,826 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-07 08:44:36,826 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-07 08:44:36,826 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:59864, storageID=DS-1317215496-127.0.1.1-59864-1291711475838, infoPort=60715, ipcPort=54229):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-07 08:44:36,828 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-07 08:44:36,829 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-07 08:44:36,829 INFO datanode.DataNode (DataNode.java:run(1442)) - DatanodeRegistration(127.0.0.1:59864, storageID=DS-1317215496-127.0.1.1-59864-1291711475838, infoPort=60715, ipcPort=54229):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-07 08:44:36,829 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54229
[junit] 2010-12-07 08:44:36,829 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-07 08:44:36,830 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-07 08:44:36,830 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-07 08:44:36,830 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-07 08:44:36,942 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-07 08:44:36,942 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-07 08:44:36,943 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 8
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47954
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47954: exiting
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47954
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 47954: exiting
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 47954: exiting
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 47954: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 31.549 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 48 minutes 52 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 028f4b400a0f02aaace6ca8713c33f8e but expecting ee62ee5e0aa3186b2f7e7cc8028ec445
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 028f4b400a0f02aaace6ca8713c33f8e but expecting ee62ee5e0aa3186b2f7e7cc8028ec445
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1m(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
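The failure above is FSImage.loadFSImage comparing the MD5 digest of the on-disk fsimage against the digest recorded when the image was saved; any byte-level difference (truncation, concurrent rewrite, disk corruption) aborts the load. The shape of that check can be sketched with only Python's standard library (the file contents and paths here are invented for illustration, not Hadoop's):

```python
import hashlib
import os
import tempfile

def md5_of_file(path, chunk_size=1 << 16):
    """Stream the file through MD5 in chunks; an fsimage can be large,
    so it should not be read into memory whole."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path, expected_md5):
    """Raise on mismatch, mirroring the IOException wording in the trace above."""
    actual = md5_of_file(path)
    if actual != expected_md5:
        raise IOError(
            "Image file %s is corrupt with MD5 checksum of %s "
            "but expecting %s" % (path, actual, expected_md5)
        )

# Demo against a throwaway file standing in for an fsimage.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"fsimage bytes")
tmp.close()
ok = md5_of_file(tmp.name)
verify_image(tmp.name, ok)  # matching digest: returns silently
os.unlink(tmp.name)
```

Note the expected digest in a real checkpoint is captured at save time; the mismatch in this test therefore points at the image file changing between the save and the secondary's reload.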

Hadoop-Hdfs-trunk-Commit - Build # 480 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/480/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 150884 lines...]
[junit] 2010-12-06 05:39:01,983 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-06 05:39:01,983 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-06 05:39:02,085 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44565
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44565: exiting
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44565: exiting
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44565
[junit] 2010-12-06 05:39:02,086 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:58128, storageID=DS-1524592330-127.0.1.1-58128-1291613941001, infoPort=38453, ipcPort=44565):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-06 05:39:02,086 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44565: exiting
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-06 05:39:02,087 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-06 05:39:02,087 INFO datanode.DataNode (DataNode.java:run(1442)) - DatanodeRegistration(127.0.0.1:58128, storageID=DS-1524592330-127.0.1.1-58128-1291613941001, infoPort=38453, ipcPort=44565):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-06 05:39:02,088 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44565
[junit] 2010-12-06 05:39:02,088 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-06 05:39:02,088 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-06 05:39:02,088 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-06 05:39:02,089 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-06 05:39:02,190 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-06 05:39:02,190 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-06 05:39:02,191 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 12 6
[junit] 2010-12-06 05:39:02,192 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 58063
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 58063
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 58063: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 128.781 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 52 minutes 28 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d28e4d9e9984bf03a84c64f929bee64e but expecting 74655f21050167571fffcde53aea434c
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d28e4d9e9984bf03a84c64f929bee64e but expecting 74655f21050167571fffcde53aea434c
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1k(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)

Hadoop-Hdfs-trunk-Commit - Build # 479 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/479/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-12-04 00:45:28,014 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-04 00:45:28,015 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-04 00:45:28,015 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-04 00:45:28,023 INFO common.Storage (FSImageFormat.java:write(474)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-04 00:45:28,026 INFO common.Storage (FSImageFormat.java:write(498)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-04 00:45:28,027 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-04 00:45:28,027 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-04 00:45:28,032 INFO common.Storage (FSImage.java:format(1339)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-04 00:45:28,034 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-04 00:45:28,034 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-04 00:45:28,034 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-04 00:45:28,035 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-04 00:45:28,035 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-04 00:45:28,038 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-04 00:45:28,039 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-04 00:45:28,039 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-04 00:45:28,039 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-04 00:45:28,040 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-04 00:45:28,040 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-04 00:45:28,041 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-04 00:45:28,041 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-04 00:45:28,042 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-04 00:45:28,043 INFO common.Storage (FSImageFormat.java:load(171)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-04 00:45:28,043 INFO common.Storage (FSImageFormat.java:load(174)) - Number of files = 1
[junit] 2010-12-04 00:45:28,044 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(342)) - Number of files under construction = 0
[junit] 2010-12-04 00:45:28,044 INFO common.Storage (FSImageFormat.java:load(195)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-04 00:45:28,044 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-04 00:45:28,045 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-04 00:45:28,045 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 13 msecs
[junit] 2010-12-04 00:45:28,046 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-04 00:45:28,046 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvxq(363)) - Running __CLR3_0_2q30srsvxq
[junit] 2010-12-04 00:45:28,047 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-04 00:45:28,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-04 00:45:28,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-04 00:45:28,055 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 2592387, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.424 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuon3(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1honl(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcboo4(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513oon(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpop0(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxopc(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
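All six failures above share the identical deepest frame, DataNode.syncBlock(DataNode.java:1883), reached through the common helper testSyncReplicas: one regression, surfacing six times. Grouping a junit failure report by its deepest frame makes that immediately visible; a stdlib-only sketch (the report format is assumed from the output above, and the a.b.* names below are placeholders):

```python
import re
from collections import defaultdict

FAILED = re.compile(r"FAILED: (\S+)")
FRAME = re.compile(r"at (\S+)\(([^)]+)\)")

def group_failures(report):
    """Bucket failed tests by the first (deepest) stack frame after each
    FAILED: header, so N tests with one root cause collapse to one bucket."""
    groups = defaultdict(list)
    current, deepest = None, None
    for line in report.splitlines():
        m = FAILED.search(line)
        if m:
            if current and deepest:
                groups[deepest].append(current)
            current, deepest = m.group(1), None
            continue
        m = FRAME.search(line)
        if m and current and deepest is None:
            deepest = "%s(%s)" % (m.group(1), m.group(2))
    if current and deepest:
        groups[deepest].append(current)
    return dict(groups)

report = """\
FAILED: a.b.TestBlockRecovery.testFinalizedReplicas
at a.b.DataNode.syncBlock(DataNode.java:1883)
FAILED: a.b.TestBlockRecovery.testRBWReplicas
at a.b.DataNode.syncBlock(DataNode.java:1883)
"""
buckets = group_failures(report)
print(buckets)  # both tests land in the single syncBlock bucket
```

Run over the build-479 report, this would yield one bucket of six tests, which is the right unit for filing a single JIRA rather than six.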

Hadoop-Hdfs-trunk-Commit - Build # 478 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/478/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-12-03 21:45:08,188 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-03 21:45:08,188 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-03 21:45:08,189 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-03 21:45:08,198 INFO common.Storage (FSImageFormat.java:write(474)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-03 21:45:08,201 INFO common.Storage (FSImageFormat.java:write(498)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-03 21:45:08,202 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-03 21:45:08,202 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-03 21:45:08,207 INFO common.Storage (FSImage.java:format(1339)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-03 21:45:08,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-03 21:45:08,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-03 21:45:08,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-03 21:45:08,209 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-03 21:45:08,209 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-03 21:45:08,209 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-03 21:45:08,210 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-03 21:45:08,210 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-03 21:45:08,210 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-03 21:45:08,216 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-03 21:45:08,216 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-03 21:45:08,217 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-03 21:45:08,217 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-03 21:45:08,218 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-03 21:45:08,219 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-03 21:45:08,219 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-03 21:45:08,220 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-03 21:45:08,220 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-03 21:45:08,222 INFO common.Storage (FSImageFormat.java:load(171)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-03 21:45:08,222 INFO common.Storage (FSImageFormat.java:load(174)) - Number of files = 1
[junit] 2010-12-03 21:45:08,223 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(342)) - Number of files under construction = 0
[junit] 2010-12-03 21:45:08,223 INFO common.Storage (FSImageFormat.java:load(195)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-03 21:45:08,224 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-03 21:45:08,267 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-03 21:45:08,268 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 61 msecs
[junit] 2010-12-03 21:45:08,268 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-03 21:45:08,269 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvxq(363)) - Running __CLR3_0_2q30srsvxq
[junit] 2010-12-03 21:45:08,271 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-03 21:45:08,271 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-03 21:45:08,271 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-03 21:45:08,278 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 18082301, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.607 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuon3(TestBlockRecovery.java:165)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1honl(TestBlockRecovery.java:204)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcboo4(TestBlockRecovery.java:243)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513oon(TestBlockRecovery.java:281)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpop0(TestBlockRecovery.java:305)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxopc(TestBlockRecovery.java:329)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
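All six failures share one shape: the same NullPointerException raised inside DataNode.syncBlock when invoked via the test's testSyncReplicas helper, which is consistent with a stubbed collaborator being left to return null and then dereferenced. A hedged illustration of that pattern (hypothetical names, not the actual Hadoop sources):

```java
public class NullStubSketch {
    // Stand-in for a mocked dependency whose stub was never given a return value
    interface TargetSource {
        java.util.List<String> getTargets();
    }

    // Dereferences the stub's result, as syncBlock-style code would
    static int syncBlock(TargetSource src) {
        return src.getTargets().size(); // NPE when the stub returns null
    }

    public static void main(String[] args) {
        TargetSource incompleteStub = () -> null; // mimics an unconfigured mock
        try {
            syncBlock(incompleteStub);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, same shape as the TestBlockRecovery failures");
        }
    }
}
```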
Hadoop-Hdfs-trunk-Commit - Build # 477 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/477/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4113 lines...]
[junit] 2010-12-02 03:06:11,229 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-02 03:06:11,239 INFO common.Storage (FSImageFormat.java:write(444)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-02 03:06:11,242 INFO common.Storage (FSImageFormat.java:write(468)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-02 03:06:11,243 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-02 03:06:11,243 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-02 03:06:11,251 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-02 03:06:11,251 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-02 03:06:11,252 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-02 03:06:11,252 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-02 03:06:11,252 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-02 03:06:11,253 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-02 03:06:11,253 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-02 03:06:11,253 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-02 03:06:11,253 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-02 03:06:11,254 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-02 03:06:11,261 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-02 03:06:11,261 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-02 03:06:11,262 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-02 03:06:11,262 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-02 03:06:11,263 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-02 03:06:11,263 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-02 03:06:11,264 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-02 03:06:11,264 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-02 03:06:11,265 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-02 03:06:11,266 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-02 03:06:11,267 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-12-02 03:06:11,267 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(311)) - Number of files under construction = 0
[junit] 2010-12-02 03:06:11,267 INFO common.Storage (FSImageFormat.java:load(286)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-02 03:06:11,268 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-02 03:06:11,318 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-02 03:06:11,319 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 68 msecs
[junit] 2010-12-02 03:06:11,319 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-02 03:06:11,320 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvx1(363)) - Running __CLR3_0_2q30srsvx1
[junit] 2010-12-02 03:06:11,321 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-02 03:06:11,322 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-02 03:06:11,322 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-02 03:06:11,328 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 31346136, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.445 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 39 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Error updating JIRA issues. Saving issues for next build.
com.atlassian.jira.rpc.exception.RemotePermissionException: This issue does not exist or you don't have permission to view it.
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuome(TestBlockRecovery.java:165)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homw(TestBlockRecovery.java:204)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonf(TestBlockRecovery.java:243)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513ony(TestBlockRecovery.java:281)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoob(TestBlockRecovery.java:305)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxoon(TestBlockRecovery.java:329)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 476 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/476/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-12-01 22:25:38,274 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-01 22:25:38,274 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-01 22:25:38,275 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-01 22:25:38,283 INFO common.Storage (FSImageFormat.java:write(444)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-01 22:25:38,286 INFO common.Storage (FSImageFormat.java:write(468)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-01 22:25:38,286 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-01 22:25:38,287 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-01 22:25:38,295 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-01 22:25:38,295 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-01 22:25:38,296 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-01 22:25:38,297 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-01 22:25:38,297 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-01 22:25:38,297 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-01 22:25:38,312 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-01 22:25:38,313 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-01 22:25:38,313 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-01 22:25:38,313 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-01 22:25:38,314 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-01 22:25:38,314 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-01 22:25:38,315 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-01 22:25:38,315 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-01 22:25:38,316 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(311)) - Number of files under construction = 0
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:load(286)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-01 22:25:38,318 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-01 22:25:38,318 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-01 22:25:38,319 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 24 msecs
[junit] 2010-12-01 22:25:38,319 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-01 22:25:38,320 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvx1(363)) - Running __CLR3_0_2q30srsvx1
[junit] 2010-12-01 22:25:38,321 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-01 22:25:38,321 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-01 22:25:38,321 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-01 22:25:38,327 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 15580729, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.434 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 57 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuome(TestBlockRecovery.java:165)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homw(TestBlockRecovery.java:204)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonf(TestBlockRecovery.java:243)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513ony(TestBlockRecovery.java:281)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoob(TestBlockRecovery.java:305)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxoon(TestBlockRecovery.java:329)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 475 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/475/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4112 lines...]
[junit] 2010-11-30 06:24:29,034 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 06:24:29,035 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 06:24:29,035 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 06:24:29,044 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 06:24:29,047 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-30 06:24:29,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-30 06:24:29,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-30 06:24:29,055 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-30 06:24:29,056 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-30 06:24:29,056 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-30 06:24:29,056 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-30 06:24:29,057 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-30 06:24:29,057 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-30 06:24:29,057 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-30 06:24:29,057 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-30 06:24:29,058 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-30 06:24:29,058 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
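[Editor's note: the four util.GSet lines above are consistent with the BlocksMap capacity being the largest power of two that fits in 2% of the max heap, at 4 bytes per reference on a 32-bit VM: 9.86125 MB / 4 bytes is about 2.58M slots, rounded down to 2^21 = 2097152. A sketch of that computation, under those stated assumptions; the method and constant names are illustrative, not the actual BlocksMap.computeCapacity code:]

```java
// Sketch of the capacity computation behind the util.GSet log lines above.
// Assumption (illustrative): capacity = largest power of two no greater than
// (2% of max heap) / reference-size, with 4-byte references on a 32-bit VM.
public class GSetCapacitySketch {
    static int computeCapacity(double budgetMB, int refSizeBytes) {
        long budgetBytes = (long) (budgetMB * 1024 * 1024);
        long entries = budgetBytes / refSizeBytes;
        // round down to a power of two
        int exponent = 63 - Long.numberOfLeadingZeros(entries);
        return 1 << exponent;
    }

    public static void main(String[] args) {
        // "2% max memory = 9.86125 MB", 32-bit VM => 4-byte references
        System.out.println(computeCapacity(9.86125, 4)); // 2097152, i.e. 2^21
    }
}
```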
[junit] 2010-11-30 06:24:29,061 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-30 06:24:29,062 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-30 06:24:29,062 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 06:24:29,063 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 06:24:29,063 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-30 06:24:29,064 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-30 06:24:29,064 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 06:24:29,065 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 06:24:29,065 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 06:24:29,066 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 06:24:29,067 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-30 06:24:29,067 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-30 06:24:29,067 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-30 06:24:29,068 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-30 06:24:29,069 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-30 06:24:29,069 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-30 06:24:29,069 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-30 06:24:29,070 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-30 06:24:29,071 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-30 06:24:29,072 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-30 06:24:29,072 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-30 06:24:29,079 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 4148925, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.517 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 5 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuomb(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homt(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonc(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513onv(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoo8(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxook(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
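[Editor's note: all six failures above report "Error Message: null" because a NullPointerException raised without a message prints as null in the junit summary; the earlier console line "Mock for BlockInfoUnderConstruction" suggests DataNode.syncBlock dereferenced something left null by an unstubbed mock, though that is an inference from the logs, not confirmed. A minimal self-contained sketch of the reporting pattern; the class and field names are hypothetical, not the actual DataNode code:]

```java
// Why each failure shows "Error Message: null": a NullPointerException
// constructed with no message (as a bare dereference NPE is on the JVMs of
// this era) returns null from getMessage(), which junit prints verbatim.
// Names below are hypothetical, not the real DataNode.syncBlock.
public class NpeMessageSketch {
    static String syncBlock(Object replica) {
        if (replica == null) {
            // stands in for dereferencing a field an unstubbed mock left null
            throw new NullPointerException();
        }
        return replica.toString();
    }

    public static void main(String[] args) {
        try {
            syncBlock(null); // e.g. a mock returned null for the replica
        } catch (NullPointerException e) {
            System.out.println("Error Message: " + e.getMessage()); // Error Message: null
        }
    }
}
```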
Hadoop-Hdfs-trunk-Commit - Build # 474 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/474/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-11-30 05:58:01,189 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 05:58:01,189 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 05:58:01,189 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 05:58:01,197 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 05:58:01,200 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-30 05:58:01,201 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-30 05:58:01,201 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-30 05:58:01,207 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-30 05:58:01,209 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-30 05:58:01,209 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-30 05:58:01,209 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-30 05:58:01,209 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-30 05:58:01,210 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-30 05:58:01,213 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-30 05:58:01,214 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-30 05:58:01,214 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 05:58:01,214 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 05:58:01,215 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-30 05:58:01,215 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-30 05:58:01,216 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 05:58:01,216 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 05:58:01,217 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 05:58:01,218 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 05:58:01,219 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-30 05:58:01,219 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-30 05:58:01,219 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-30 05:58:01,220 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-30 05:58:01,220 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-30 05:58:01,221 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-30 05:58:01,221 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-30 05:58:01,222 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-30 05:58:01,223 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-30 05:58:01,223 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-30 05:58:01,224 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-30 05:58:01,231 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 23525817, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.414 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 5 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuomb(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homt(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonc(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513onv(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoo8(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxook(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 473 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/473/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-11-29 07:37:02,860 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 07:37:02,860 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 07:37:02,861 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 07:37:02,868 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 07:37:02,871 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-29 07:37:02,872 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-29 07:37:02,872 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-29 07:37:02,877 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-29 07:37:02,879 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-29 07:37:02,883 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-29 07:37:02,883 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-29 07:37:02,884 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 07:37:02,884 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 07:37:02,884 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-29 07:37:02,885 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-29 07:37:02,885 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 07:37:02,886 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 07:37:02,886 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 07:37:02,887 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 07:37:02,887 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-29 07:37:02,888 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-29 07:37:02,888 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-29 07:37:02,888 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-29 07:37:02,889 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-29 07:37:02,889 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 12 msecs
[junit] 2010-11-29 07:37:02,890 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-29 07:37:02,890 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-29 07:37:02,891 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-29 07:37:02,892 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-29 07:37:02,892 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-29 07:37:02,898 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 9975050, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.625 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuomb(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homt(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonc(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513onv(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoo8(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxook(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 472 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/472/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-11-29 02:56:42,930 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 02:56:42,931 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 02:56:42,931 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 02:56:42,940 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 02:56:42,943 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-29 02:56:42,944 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-29 02:56:42,944 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-29 02:56:42,949 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-29 02:56:42,950 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-29 02:56:42,950 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-29 02:56:42,951 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-29 02:56:42,951 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-29 02:56:42,951 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-29 02:56:42,951 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-29 02:56:42,952 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-29 02:56:42,952 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-29 02:56:42,952 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-29 02:56:42,956 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-29 02:56:42,956 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-29 02:56:42,957 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 02:56:42,957 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 02:56:42,958 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-29 02:56:42,958 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-29 02:56:42,959 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 02:56:42,959 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 02:56:42,960 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 02:56:42,961 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 02:56:42,961 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-29 02:56:42,961 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-29 02:56:42,962 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-29 02:56:42,962 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-29 02:56:42,963 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-29 02:56:42,963 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 13 msecs
[junit] 2010-11-29 02:56:42,963 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-29 02:56:42,964 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-29 02:56:42,966 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-29 02:56:42,966 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-29 02:56:42,966 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-29 02:56:42,973 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 9975050, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.557 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 2 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuomb(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homt(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonc(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513onv(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoo8(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxook(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 471 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/471/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1219 lines...]
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] byte [] name = FSImageSerialization.readBytes(in);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:249: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] v.visit(ImageElement.CLIENT_NAME, FSImageSerialization.readString(in));
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:250: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] v.visit(ImageElement.CLIENT_MACHINE, FSImageSerialization.readString(in));
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:261: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] FSImageSerialization.readString(in);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:262: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] FSImageSerialization.readString(in);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:340: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] v.visit(ImageElement.INODE_PATH, FSImageSerialization.readString(in));
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 39 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:335: Compile failed; see the compiler error output for details.
Total time: 41 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.