Posted to hdfs-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2010/11/23 03:05:55 UTC
Build failed in Hudson: Hadoop-Hdfs-trunk-Commit #465
See <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/465/changes>
Changes:
[eli] HDFS-1513. Fix a number of warnings. Contributed by Eli Collins
------------------------------------------
[...truncated 3893 lines...]
[junit] 2010-11-23 02:05:30,340 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,340 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,344 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,344 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,345 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,345 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,346 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,346 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 02:05:30,348 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,348 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,348 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,350 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 02:05:30,350 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 02:05:30,350 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,351 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,398 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 02:05:30,399 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 61 msecs
[junit] 2010-11-23 02:05:30,399 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 02:05:30,400 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2cpnfswvqb(253)) - Running __CLR3_0_2cpnfswvqb
[junit] 2010-11-23 02:05:30,401 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 02:05:30,401 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 02:05:30,402 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 02:05:30,408 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 20245380, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 02:05:30,425 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 02:05:30,425 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,426 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,426 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,427 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,427 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,428 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,428 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,428 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,428 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,429 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,429 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,429 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,433 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,433 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,433 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,434 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,434 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,438 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 02:05:30,438 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 02:05:30,439 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 02:05:30,444 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 02:05:30,445 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,445 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,446 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,446 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,446 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,446 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,447 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,447 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,447 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,450 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,451 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,451 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,452 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,452 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,453 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 02:05:30,454 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,455 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,455 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,456 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 02:05:30,457 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 02:05:30,457 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,457 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,458 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 02:05:30,458 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-23 02:05:30,459 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 02:05:30,459 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2xseoacvqq(279)) - Running __CLR3_0_2xseoacvqq
[junit] 2010-11-23 02:05:30,460 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 02:05:30,461 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 02:05:30,461 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 02:05:30,469 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 27970911, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 02:05:30,484 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 02:05:30,485 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,485 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,486 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,486 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,487 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,487 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,487 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,487 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,488 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,488 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,488 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,488 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,492 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,493 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,493 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,493 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,494 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,497 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 02:05:30,499 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 02:05:30,499 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 02:05:30,505 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 02:05:30,506 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,506 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,507 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,507 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,507 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,507 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,508 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,508 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,508 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,520 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,521 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,521 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,522 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,522 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,523 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 02:05:30,524 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,525 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,525 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,526 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 02:05:30,526 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 02:05:30,527 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,527 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,528 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 02:05:30,528 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 23 msecs
[junit] 2010-11-23 02:05:30,528 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 02:05:30,529 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2wnrgefvr6(307)) - Running __CLR3_0_2wnrgefvr6
[junit] 2010-11-23 02:05:30,530 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 02:05:30,530 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 02:05:30,531 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 02:05:30,537 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 12893236, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 02:05:30,552 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 02:05:30,553 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,553 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,554 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,555 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,555 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,555 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,555 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,556 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,556 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,556 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,556 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,557 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,560 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,560 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,561 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,561 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,561 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,565 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 02:05:30,566 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 02:05:30,566 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 02:05:30,571 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 02:05:30,572 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,572 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,572 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,573 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,573 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,573 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,573 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,574 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,574 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,577 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,578 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,578 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,578 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,579 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,579 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 02:05:30,581 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,581 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,582 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,583 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 02:05:30,583 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 02:05:30,583 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,584 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,584 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 02:05:30,585 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-23 02:05:30,585 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 02:05:30,586 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2f0vsivvrm(335)) - Running __CLR3_0_2f0vsivvrm
[junit] 2010-11-23 02:05:30,587 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 02:05:30,587 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 02:05:30,587 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 02:05:30,593 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 10202458, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 02:05:30,608 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 02:05:30,609 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,609 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,610 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,611 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,611 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,611 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,611 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,612 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,612 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,612 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,612 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,613 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,627 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,627 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,628 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,628 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,628 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 02:05:30,632 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 02:05:30,632 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 02:05:30,633 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 02:05:30,638 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 02:05:30,638 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 02:05:30,639 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 02:05:30,639 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 02:05:30,639 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 02:05:30,639 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 02:05:30,640 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 02:05:30,640 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 02:05:30,640 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 02:05:30,640 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 02:05:30,644 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 02:05:30,644 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 02:05:30,645 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 02:05:30,645 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 02:05:30,646 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 02:05:30,646 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 02:05:30,648 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occurring more than 10 times
[junit] 2010-11-23 02:05:30,648 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,648 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 02:05:30,650 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 02:05:30,650 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 02:05:30,650 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,650 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 02:05:30,651 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 02:05:30,651 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 13 msecs
[junit] 2010-11-23 02:05:30,652 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 02:05:30,652 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvs2(363)) - Running __CLR3_0_2q30srsvs2
[junit] 2010-11-23 02:05:30,654 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 02:05:30,654 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 02:05:30,654 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 02:05:30,660 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 26089635, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.396 sec
checkfailure:
[touch] Creating <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/testsfailed>
BUILD FAILED
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:675: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:638: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:706: Tests failed!
Total time: 1 minute 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Hadoop-Hdfs-trunk-Commit - Build # 507 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/507/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1369 lines...]
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] [INFO] Uploading project information for hadoop-hdfs 0.23.0-20110106.180913-43
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error installing artifact's metadata: Error while deploying metadata: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.180913-43.pom. Return code is: 502
Total time: 69 minutes 24 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
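[Editor's note] The build above did not fail in tests; it failed deploying the snapshot artifact because repository.apache.org returned HTTP 502, a transient gateway error. A minimal sketch of the usual remedy, retrying the transfer on 5xx responses, is below. This is illustrative only and not part of the Hadoop or Maven codebase; `upload` and `TransferError` are hypothetical stand-ins for the artifact transfer and its failure.

```python
import time


class TransferError(Exception):
    """Hypothetical error carrying the HTTP status of a failed transfer."""
    def __init__(self, status):
        super().__init__(f"Return code is: {status}")
        self.status = status


def deploy_with_retry(upload, retries=3, backoff=1.0):
    """Retry upload() on 5xx transfer errors with linear backoff.

    Client-side (4xx) failures are re-raised immediately, since
    retrying them cannot succeed; only server-side errors like the
    502 seen in this build are worth another attempt.
    """
    for attempt in range(retries):
        try:
            return upload()
        except TransferError as err:
            last_attempt = attempt == retries - 1
            if not (500 <= err.status < 600) or last_attempt:
                raise
            time.sleep(backoff * (attempt + 1))
```

Under this assumption, a redeploy a few minutes later (as the later builds effectively did) is equivalent to one more `upload()` attempt after a long backoff.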
Hadoop-Hdfs-trunk-Commit - Build # 506 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/506/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1367 lines...]
-compile-test-system.wrapper:
[mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
[javac] Compiling 1 source file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/test/classes
jar-test-system:
-do-jar-test:
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT.jar
[jar] Building jar: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build-fi/system/hadoop-hdfs-instrumented-test-0.23.0-SNAPSHOT-sources.jar
set-version:
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
[copy] Copying 1 file to /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy
clean-sign:
sign:
signanddeploy:
simpledeploy:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar to apache.snapshots.https
[artifact:deploy] Uploaded 1018K
[artifact:deploy] An error has occurred while processing the Maven artifact tasks.
[artifact:deploy] Diagnosis:
[artifact:deploy]
[artifact:deploy] Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1652: Error deploying artifact 'org.apache.hadoop:hadoop-hdfs:jar': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.23.0-SNAPSHOT/hadoop-hdfs-0.23.0-20110106.074133-43.jar. Return code is: 502
Total time: 31 minutes 40 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 505 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/505/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 137035 lines...]
[junit] 2010-12-27 23:13:48,122 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,123 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 56721: exiting
[junit] 2010-12-27 23:13:48,225 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 23:13:48,225 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 23:13:48,225 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 56721
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 23:13:48,227 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:50678, storageID=DS-238565002-127.0.1.1-50678-1293491627053, infoPort=38684, ipcPort=56721):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 23:13:48,227 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 56721
[junit] 2010-12-27 23:13:48,228 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 23:13:48,228 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 23:13:48,228 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 23:13:48,330 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,330 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 8
[junit] 2010-12-27 23:13:48,330 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54561
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 54561: exiting
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 23:13:48,332 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 54561: exiting
[junit] 2010-12-27 23:13:48,333 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 54561: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.245 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 5ab6639edef23cfa16bd6c2d300b3919 but expecting 97807cc92a7b57f44e8c137f9f3ac20c
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4y(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
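[Editor's note] The failure above is the SecondaryNameNode rejecting a checkpoint because the fsimage file's MD5 digest did not match the expected one. A minimal sketch of that kind of integrity check follows; it is not Hadoop's actual FSImage.loadFSImage code, just an assumed illustration of comparing a file's digest against an expected hex string.

```python
import hashlib


def verify_image(path, expected_md5):
    """Compare a file's MD5 hex digest with the expected digest,
    raising IOError on mismatch, in the spirit of the FSImage
    check that failed above."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large image files need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    actual = md5.hexdigest()
    if actual != expected_md5:
        raise IOError(
            f"Image file {path} is corrupt with MD5 checksum of "
            f"{actual} but expecting {expected_md5}")
    return actual
```

A mismatch like the one in this build means the bytes on disk differ from the bytes that were checkpointed, pointing at a truncated or concurrently modified image rather than a transfer problem.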
Hadoop-Hdfs-trunk-Commit - Build # 504 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/504/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144338 lines...]
[junit] 2010-12-27 22:24:13,132 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-27 22:24:13,245 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47521: exiting
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,246 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47521
[junit] 2010-12-27 22:24:13,246 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47521: exiting
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,248 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:33402, storageID=DS-1192234358-127.0.1.1-33402-1293488652220, infoPort=53019, ipcPort=47521):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-27 22:24:13,249 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47521
[junit] 2010-12-27 22:24:13,249 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-27 22:24:13,249 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-27 22:24:13,250 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-27 22:24:13,362 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,362 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 7
[junit] 2010-12-27 22:24:13,362 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 53495
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 53495: exiting
[junit] 2010-12-27 22:24:13,364 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 53495: exiting
[junit] 2010-12-27 22:24:13,365 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 53495: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.882 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 57bf7fd73d0975fca627e019645026f7 but expecting 4cac6dcf588c1cd6a8fe37ac06fe93e8
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 503 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/503/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139187 lines...]
[junit] 2010-12-26 05:36:20,712 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-26 05:36:20,814 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 49021: exiting
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 49021
[junit] 2010-12-26 05:36:20,815 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 49021: exiting
[junit] 2010-12-26 05:36:20,816 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,816 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-26 05:36:20,816 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-26 05:36:20,818 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:55257, storageID=DS-754055407-127.0.1.1-55257-1293341779763, infoPort=35514, ipcPort=49021):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-26 05:36:20,819 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 49021
[junit] 2010-12-26 05:36:20,819 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-26 05:36:20,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-26 05:36:20,820 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-26 05:36:20,922 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,922 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 13 3
[junit] 2010-12-26 05:36:20,922 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-26 05:36:20,923 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44058
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 44058: exiting
[junit] 2010-12-26 05:36:20,924 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 44058: exiting
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44058
[junit] 2010-12-26 05:36:20,925 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-26 05:36:20,926 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 44058: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.034 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:691: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:648: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:716: Tests failed!
Total time: 8 minutes 49 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 580e778f2959ad8f290271e867122576 but expecting f8e9b33f6f17300263dc9231e1296725
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
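For context on the TestStorageRestore failure above: the error comes from a verify-on-load check, where the checkpoint records an expected MD5 digest of the fsimage and the loader recomputes the digest before trusting the file. A minimal, hypothetical sketch of that pattern (class and method names are illustrative; this is not the actual FSImage.loadFSImage code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical verify-on-load sketch: recompute the MD5 of an image file
// and compare it to the digest recorded at checkpoint time. Illustration
// of the pattern only, not the FSImage implementation.
public class ImageChecksum {

    static String md5Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);          // stream the file through the digest
            }
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    static void verify(Path file, String expected)
            throws IOException, NoSuchAlgorithmException {
        String actual = md5Hex(file);
        if (!actual.equals(expected)) {
            throw new IOException("Image file " + file
                + " is corrupt with MD5 checksum of " + actual
                + " but expecting " + expected);
        }
    }
}
```

Any mismatch between the recomputed and recorded digests is reported as corruption, which matches the IOException shape in the stack trace above.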
Hadoop-Hdfs-trunk-Commit - Build # 502 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/502/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete file /homes/hudson/.ivy2/cache/org.apache.hadoop/avro/jars/.nfs00000000054240250000002b
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
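Build #502's clean-cache failure above ("Unable to delete file .../.nfs00000000054240250000002b") is characteristic of NFS silly rename: removing a still-open file over NFS leaves a hidden ".nfsXXXX" placeholder that cannot be deleted until the last open handle goes away, so a recursive delete of the Ivy cache fails on that one entry. A hypothetical sketch of a recursive delete that surfaces the blocking entry (this is not Ant's delete task):

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch: depth-first delete of a directory tree that reports
// the first entry it cannot remove (on NFS this is typically a ".nfsXXXX"
// silly-rename placeholder kept alive by an open file handle). Not Ant's
// delete task; illustration only.
public class TreeDelete {

    static void deleteTree(File root) throws IOException {
        File[] children = root.listFiles();    // null for plain files
        if (children != null) {
            for (File child : children) {
                deleteTree(child);             // remove contents before the dir
            }
        }
        if (!root.delete()) {
            throw new IOException("Unable to delete " + root);
        }
    }
}
```

On a local filesystem the delete succeeds once the tree is empty; over NFS the placeholder makes root.delete() fail for as long as some process still holds the original file open, which is why re-running the build after the offending process exits usually clears the cache.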
Hadoop-Hdfs-trunk-Commit - Build # 501 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/501/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 976 lines...]
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1251: Unable to delete directory /homes/hudson/.ivy2/cache/org.apache.hadoop
Total time: 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 500 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/500/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 148064 lines...]
[junit] 2010-12-22 04:48:21,922 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-22 04:48:21,922 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-22 04:48:22,025 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37929
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:41067, storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, ipcPort=37929):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 37929: exiting
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 37929
[junit] 2010-12-22 04:48:22,026 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-22 04:48:22,026 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 37929: exiting
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:41067, storageID=DS-911990291-127.0.1.1-41067-1292993301009, infoPort=33105, ipcPort=37929):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-22 04:48:22,027 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37929
[junit] 2010-12-22 04:48:22,027 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-22 04:48:22,028 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-22 04:48:22,028 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-22 04:48:22,028 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-22 04:48:22,130 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,130 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 6 7
[junit] 2010-12-22 04:48:22,130 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-22 04:48:22,131 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 40184
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 40184
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 40184: exiting
[junit] 2010-12-22 04:48:22,133 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 40184: exiting
[junit] 2010-12-22 04:48:22,132 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 40184: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.862 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of bd3f6219eddb98077bc18e6c12861710 but expecting 88d060e5de3c40f6ddcf4149957f9ec2
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of bd3f6219eddb98077bc18e6c12861710 but expecting 88d060e5de3c40f6ddcf4149957f9ec2
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
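The DataXceiveServer AsynchronousCloseException entries that recur in these console logs are part of normal shutdown, not a failure: when MiniDFSCluster closes the DataNode's ServerSocketChannel while another thread is blocked in accept(), the JDK specifies that the blocked accept() throws AsynchronousCloseException. A self-contained demonstration of that JDK behavior (class and method names here are illustrative, unrelated to the Hadoop code itself):

```java
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousCloseException;
import java.nio.channels.ServerSocketChannel;

// Demonstrates the documented JDK behavior behind the log entries: closing
// a ServerSocketChannel from another thread while accept() is blocked makes
// the blocked accept() throw AsynchronousCloseException.
public class CloseDuringAccept {

    // Returns true iff the blocked accept() ended with AsynchronousCloseException.
    static boolean closeWhileAccepting() throws Exception {
        ServerSocketChannel ch = ServerSocketChannel.open();
        ch.bind(new InetSocketAddress(0));     // ephemeral port
        Thread closer = new Thread(() -> {
            try {
                Thread.sleep(200);             // give the main thread time to block
                ch.close();
            } catch (Exception ignored) {
            }
        });
        closer.start();
        try {
            ch.accept();                       // blocks until the channel is closed
            return false;
        } catch (AsynchronousCloseException expected) {
            return true;
        } finally {
            closer.join();
        }
    }
}
```

This is why the shutdown sequence logs the exception at WARN and then proceeds to "Waiting for threadgroup to exit": the asynchronous close is the expected way to unblock the acceptor thread.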
Hadoop-Hdfs-trunk-Commit - Build # 499 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/499/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1473 lines...]
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java:33: package InterfaceStability does not exist
[javac] @InterfaceStability.Evolving
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:147: cannot find symbol
[javac] symbol : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] HdfsLocatedFileStatus f, Path parent) {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:146: cannot find symbol
[javac] symbol : class LocatedFileStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] private LocatedFileStatus makeQualifiedLocated(
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:159: cannot find symbol
[javac] symbol : class FsStatus
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] public FsStatus getFsStatus() throws IOException {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:164: cannot find symbol
[javac] symbol : class FsServerDefaults
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] public FsServerDefaults getServerDefaults() throws IOException {
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/fs/Hdfs.java:170: cannot find symbol
[javac] symbol : class Path
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] final Path p)
[javac] ^
[javac] Note: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 100 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:335: Compile failed; see the compiler error output for details.
Total time: 13 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 498 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/498/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 145407 lines...]
[junit] 2010-12-21 21:02:49,548 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47329
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47329
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47329: exiting
[junit] 2010-12-21 21:02:49,658 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 21:02:49,659 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:45847, storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, ipcPort=47329):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 21:02:49,659 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47329: exiting
[junit] 2010-12-21 21:02:49,661 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 21:02:49,662 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 21:02:49,662 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:45847, storageID=DS-1587999582-127.0.1.1-45847-1292965368602, infoPort=45437, ipcPort=47329):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 21:02:49,662 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47329
[junit] 2010-12-21 21:02:49,663 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 21:02:49,663 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 21:02:49,663 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 21:02:49,663 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 21:02:49,765 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 21:02:49,765 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 5
[junit] 2010-12-21 21:02:49,766 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33168
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33168
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 33168: exiting
[junit] 2010-12-21 21:02:49,768 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 33168: exiting
[junit] 2010-12-21 21:02:49,767 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 33168: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.926 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 8 minutes 50 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 3b4f6a8cfcd648671359d20cf5fede04 but expecting 42a5be8dc7187ce3f5af8891edc38f6e
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 3b4f6a8cfcd648671359d20cf5fede04 but expecting 42a5be8dc7187ce3f5af8891edc38f6e
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 497 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/497/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139268 lines...]
[junit] 2010-12-21 19:25:21,468 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:25:21,468 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:25:21,569 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38446
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38446: exiting
[junit] 2010-12-21 19:25:21,570 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38446
[junit] 2010-12-21 19:25:21,571 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:25:21,571 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:36691, storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, ipcPort=38446):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 19:25:21,571 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:36691, storageID=DS-1364120862-127.0.1.1-36691-1292959520595, infoPort=59825, ipcPort=38446):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:25:21,572 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38446
[junit] 2010-12-21 19:25:21,572 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:25:21,572 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 19:25:21,573 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 19:25:21,573 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:25:21,674 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,674 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:25:21,675 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 11 3
[junit] 2010-12-21 19:25:21,676 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 46353
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 46353
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 46353: exiting
[junit] 2010-12-21 19:25:21,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 46353: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.667 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 18 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION: org.apache.hadoop.hdfs.TestHDFSTrash.testTrashEmptier
Error Message:
null
Stack Trace:
junit.framework.AssertionFailedError: null
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:473)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 62d7ef8ef25cbf3973a3306d4f41adf9 but expecting 2fdedf97f4e6044ee113c2dd30ecb287
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 62d7ef8ef25cbf3973a3306d4f41adf9 but expecting 2fdedf97f4e6044ee113c2dd30ecb287
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
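[Editorial note] The DataXceiveServer warnings in the console tails above (java.nio.channels.AsynchronousCloseException during DataNode shutdown) are the documented behavior of closing a ServerSocketChannel while another thread is blocked in accept(): the blocked thread is released with AsynchronousCloseException rather than hanging. A minimal sketch, with the hypothetical class name AcceptClose standing in for DataXceiverServer's accept loop:

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class AcceptClose {
    public static void main(String[] args) throws Exception {
        final ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port

        final Throwable[] seen = new Throwable[1];
        Thread acceptor = new Thread(new Runnable() {
            public void run() {
                try {
                    server.accept();   // blocks, like DataXceiverServer.run()
                } catch (Throwable t) {
                    seen[0] = t;       // expect AsynchronousCloseException
                }
            }
        });
        acceptor.start();

        Thread.sleep(200);             // let the acceptor block in accept()
        server.close();                // the shutdown path closes the channel
        acceptor.join();

        System.out.println(seen[0].getClass().getSimpleName());
    }
}
```

This is why the warnings appear only during MiniDFSCluster teardown and are harmless: the exception is the intended wake-up signal for the accept thread, not a failure in its own right (a thread interrupted while blocked would instead see ClosedByInterruptException).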
Hadoop-Hdfs-trunk-Commit - Build # 496 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/496/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 143613 lines...]
[junit] 2010-12-21 19:03:38,419 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:03:38,419 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 19:03:38,520 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38397
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38397
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38397: exiting
[junit] 2010-12-21 19:03:38,521 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:34688, storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, ipcPort=38397):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 19:03:38,521 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 19:03:38,521 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38397: exiting
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:34688, storageID=DS-629189686-127.0.1.1-34688-1292958217657, infoPort=58302, ipcPort=38397):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 19:03:38,522 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38397
[junit] 2010-12-21 19:03:38,522 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 19:03:38,523 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 19:03:38,523 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 19:03:38,523 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 19:03:38,625 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 19:03:38,625 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 10 6
[junit] 2010-12-21 19:03:38,626 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 42152
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 42152
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 42152: exiting
[junit] 2010-12-21 19:03:38,627 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 42152: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.661 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 10 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of b56342fecc1aec17456b9ee1c243d2cd but expecting 2a921b0b2ea27a3eb547553c8a134cb7
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of b56342fecc1aec17456b9ee1c243d2cd but expecting 2a921b0b2ea27a3eb547553c8a134cb7
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 495 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/495/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 140929 lines...]
[junit] 2010-12-21 00:45:53,108 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 41550
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 41550: exiting
[junit] 2010-12-21 00:45:53,209 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 41550
[junit] 2010-12-21 00:45:53,209 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 41550: exiting
[junit] 2010-12-21 00:45:53,210 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 41550: exiting
[junit] 2010-12-21 00:45:53,209 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:43800, storageID=DS-968232197-127.0.1.1-43800-1292892352218, infoPort=46970, ipcPort=41550):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 00:45:53,212 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:45:53,212 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 00:45:53,213 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:43800, storageID=DS-968232197-127.0.1.1-43800-1292892352218, infoPort=46970, ipcPort=41550):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 00:45:53,213 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 41550
[junit] 2010-12-21 00:45:53,213 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:45:53,213 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 00:45:53,213 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 00:45:53,213 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:45:53,315 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:45:53,315 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:45:53,315 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 5
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48472
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 48472: exiting
[junit] 2010-12-21 00:45:53,317 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 48472
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 48472: exiting
[junit] 2010-12-21 00:45:53,319 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 48472: exiting
[junit] 2010-12-21 00:45:53,318 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 48472: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.837 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 12 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of f483a6a05343f3a5a905194d83379a3a but expecting 47b87cf59325e8317530bcfc0a6ffd58
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of f483a6a05343f3a5a905194d83379a3a but expecting 47b87cf59325e8317530bcfc0a6ffd58
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 494 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/494/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 137288 lines...]
[junit] 2010-12-21 00:33:20,495 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:33:20,495 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-21 00:33:20,610 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38250
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 38250: exiting
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 38250: exiting
[junit] 2010-12-21 00:33:20,611 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:52644, storageID=DS-596423927-127.0.1.1-52644-1292891599452, infoPort=41489, ipcPort=38250):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:33:20,611 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 38250: exiting
[junit] 2010-12-21 00:33:20,612 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:33:20,612 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38250
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:52644, storageID=DS-596423927-127.0.1.1-52644-1292891599452, infoPort=41489, ipcPort=38250):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-21 00:33:20,613 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 38250
[junit] 2010-12-21 00:33:20,613 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-21 00:33:20,614 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-21 00:33:20,614 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-21 00:33:20,614 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-21 00:33:20,716 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:33:20,716 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-21 00:33:20,716 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 6 7
[junit] 2010-12-21 00:33:20,717 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47473
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 47473: exiting
[junit] 2010-12-21 00:33:20,719 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 47473: exiting
[junit] 2010-12-21 00:33:20,719 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47473: exiting
[junit] 2010-12-21 00:33:20,718 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47473
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.053 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 10 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9e4db606ea68a788476061e245b48fb7 but expecting 495b22a3bd50d4a318a6950277ae3411
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 9e4db606ea68a788476061e245b48fb7 but expecting 495b22a3bd50d4a318a6950277ae3411
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 493 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/493/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 140756 lines...]
[junit] 2010-12-20 15:03:59,265 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-20 15:03:59,266 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-20 15:03:59,367 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52203
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 52203: exiting
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 52203
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-20 15:03:59,368 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:33187, storageID=DS-1110103111-127.0.1.1-33187-1292857438383, infoPort=58795, ipcPort=52203):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-20 15:03:59,368 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 52203: exiting
[junit] 2010-12-20 15:03:59,368 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 52203: exiting
[junit] 2010-12-20 15:03:59,369 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-20 15:03:59,370 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:33187, storageID=DS-1110103111-127.0.1.1-33187-1292857438383, infoPort=58795, ipcPort=52203):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-20 15:03:59,370 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52203
[junit] 2010-12-20 15:03:59,370 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-20 15:03:59,370 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-20 15:03:59,370 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-20 15:03:59,371 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-20 15:03:59,472 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-20 15:03:59,472 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-20 15:03:59,473 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 6
[junit] 2010-12-20 15:03:59,474 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 55985
[junit] 2010-12-20 15:03:59,474 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 55985
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 55985: exiting
[junit] 2010-12-20 15:03:59,475 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 55985: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.937 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 20 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 710b09658f742cca03ff90d01ec82cc8 but expecting 332c0560c0bb8cc85333ebada5808ba0
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 710b09658f742cca03ff90d01ec82cc8 but expecting 332c0560c0bb8cc85333ebada5808ba0
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
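The `checkfailure:` / `[touch] Creating .../testsfailed` / `Tests failed!` sequence that closes each console excerpt is a marker-file pattern: the junit target touches a flag file when any test fails, and a later Ant target fails the build if that file exists. The sketch below illustrates the pattern in plain Java under assumed names; it is not the project's build.xml logic.

```java
import java.io.File;
import java.io.IOException;

public class CheckFailure {
    // The later build step only needs to ask one question: does the
    // marker file left behind by the test run exist?
    public static boolean testsFailed(File marker) {
        return marker.exists();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for build/test/testsfailed, created here the way the
        // [touch] task creates it after a failing junit run.
        File marker = File.createTempFile("testsfailed", null);
        try {
            if (testsFailed(marker)) {
                System.out.println("Tests failed!"); // what build.xml reports
            }
        } finally {
            marker.delete();
        }
    }
}
```

The indirection exists because the junit task runs with haltonfailure disabled so all tests execute; the marker file carries the pass/fail verdict forward to the step that actually fails the build.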
Hadoop-Hdfs-trunk-Commit - Build # 492 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/492/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139404 lines...]
[junit] 2010-12-16 20:03:43,855 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-16 20:03:43,856 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43008
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 43008: exiting
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 43008
[junit] 2010-12-16 20:03:43,857 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 20:03:43,857 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-16 20:03:43,857 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:58718, storageID=DS-586157729-127.0.1.1-58718-1292529823018, infoPort=51424, ipcPort=43008):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-16 20:03:43,860 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:58718, storageID=DS-586157729-127.0.1.1-58718-1292529823018, infoPort=51424, ipcPort=43008):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-16 20:03:43,861 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43008
[junit] 2010-12-16 20:03:43,861 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 20:03:43,861 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-16 20:03:43,861 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-16 20:03:43,861 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-16 20:03:43,963 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 20:03:43,963 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 20:03:43,963 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 8
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 51660
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 51660
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 51660: exiting
[junit] 2010-12-16 20:03:43,966 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 51660: exiting
[junit] 2010-12-16 20:03:43,965 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 51660: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.697 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 23543fd946cc0d08fa477d0015383abe but expecting 9849b11c80ca88a516a83ff9e1216a39
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 23543fd946cc0d08fa477d0015383abe but expecting 9849b11c80ca88a516a83ff9e1216a39
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 491 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/491/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 133376 lines...]
[junit] 2010-12-16 19:43:24,559 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-16 19:43:24,559 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-16 19:43:24,561 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33691
[junit] 2010-12-16 19:43:24,561 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 33691: exiting
[junit] 2010-12-16 19:43:24,562 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 33691: exiting
[junit] 2010-12-16 19:43:24,562 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 33691: exiting
[junit] 2010-12-16 19:43:24,562 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33691
[junit] 2010-12-16 19:43:24,563 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 19:43:24,563 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:46255, storageID=DS-234606803-127.0.1.1-46255-1292528603674, infoPort=52737, ipcPort=33691):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-16 19:43:24,563 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 19:43:24,564 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-16 19:43:24,564 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:46255, storageID=DS-234606803-127.0.1.1-46255-1292528603674, infoPort=52737, ipcPort=33691):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-16 19:43:24,564 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33691
[junit] 2010-12-16 19:43:24,564 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-16 19:43:24,565 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-16 19:43:24,565 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-16 19:43:24,565 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-16 19:43:24,675 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 19:43:24,675 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-16 19:43:24,675 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 4
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 37453
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 37453
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 37453: exiting
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-16 19:43:24,677 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 37453: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.865 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 85d140eb152f07b333c271179251970d but expecting b6b6a7d89be0bc1cf946106cc78eacfe
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 85d140eb152f07b333c271179251970d but expecting b6b6a7d89be0bc1cf946106cc78eacfe
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 490 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/490/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 136703 lines...]
[junit] 2010-12-14 22:05:40,449 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 22:05:40,450 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54146
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54146: exiting
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54146
[junit] 2010-12-14 22:05:40,551 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:39160, storageID=DS-1617480765-127.0.1.1-39160-1292364339594, infoPort=45645, ipcPort=54146):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-14 22:05:40,551 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 22:05:40,551 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54146: exiting
[junit] 2010-12-14 22:05:40,552 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-14 22:05:40,552 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54146: exiting
[junit] 2010-12-14 22:05:40,553 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:39160, storageID=DS-1617480765-127.0.1.1-39160-1292364339594, infoPort=45645, ipcPort=54146):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-14 22:05:40,553 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54146
[junit] 2010-12-14 22:05:40,553 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 22:05:40,553 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-14 22:05:40,553 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-14 22:05:40,553 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 22:05:40,655 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 22:05:40,655 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 22:05:40,655 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 11 5
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54602
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 54602: exiting
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 54602: exiting
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54602
[junit] 2010-12-14 22:05:40,657 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 54602: exiting
[junit] 2010-12-14 22:05:40,658 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 54602: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.818 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 1 second
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 993793371432de51679ded3aeccab03d but expecting d89f442914d49bd27045e22c447a5ffa
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 993793371432de51679ded3aeccab03d but expecting d89f442914d49bd27045e22c447a5ffa
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4o(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 489 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/489/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 143003 lines...]
[junit] 2010-12-14 21:53:27,287 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(786)) - Shutting down DataNode 0
[junit] 2010-12-14 21:53:27,388 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48952
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 48952
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 48952: exiting
[junit] 2010-12-14 21:53:27,389 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 48952: exiting
[junit] 2010-12-14 21:53:27,389 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-14 21:53:27,390 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 48952: exiting
[junit] 2010-12-14 21:53:27,390 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:41759, storageID=DS-1989182890-127.0.1.1-41759-1292363606407, infoPort=45129, ipcPort=48952):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-14 21:53:27,392 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 21:53:27,392 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-14 21:53:27,393 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:41759, storageID=DS-1989182890-127.0.1.1-41759-1292363606407, infoPort=45129, ipcPort=48952):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-14 21:53:27,393 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48952
[junit] 2010-12-14 21:53:27,393 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 21:53:27,393 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-14 21:53:27,393 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-14 21:53:27,393 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 21:53:27,495 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 21:53:27,495 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 21:53:27,495 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 7
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 45548
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 45548
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 45548: exiting
[junit] 2010-12-14 21:53:27,497 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 45548: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.954 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 29 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 930091b46f62ba27b7bc0981530ac4d3 but expecting 4ed14e598349d48a2f4800088babbc6e
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 930091b46f62ba27b7bc0981530ac4d3 but expecting 4ed14e598349d48a2f4800088babbc6e
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r4l(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 488 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/488/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 146286 lines...]
[junit] 2010-12-14 18:03:05,707 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 18:03:05,707 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44181
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44181: exiting
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44181: exiting
[junit] 2010-12-14 18:03:05,809 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:45160, storageID=DS-1161769845-127.0.1.1-45160-1292349784850, infoPort=45198, ipcPort=44181):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44181
[junit] 2010-12-14 18:03:05,809 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44181: exiting
[junit] 2010-12-14 18:03:05,809 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 18:03:05,810 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-14 18:03:05,811 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:45160, storageID=DS-1161769845-127.0.1.1-45160-1292349784850, infoPort=45198, ipcPort=44181):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-14 18:03:05,811 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44181
[junit] 2010-12-14 18:03:05,811 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-14 18:03:05,811 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-14 18:03:05,811 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-14 18:03:05,812 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-14 18:03:05,913 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 18:03:05,913 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-14 18:03:05,914 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 11 5
[junit] 2010-12-14 18:03:05,915 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 39821
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 39821
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 39821: exiting
[junit] 2010-12-14 18:03:05,916 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 39821: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.814 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 5 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d80140c1c42e305c7e922044d12cc8c3 but expecting f0bf403db9f3d6c1a9f694599b49f015
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d80140c1c42e305c7e922044d12cc8c3 but expecting f0bf403db9f3d6c1a9f694599b49f015
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r3q(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 487 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/487/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144861 lines...]
[junit] 2010-12-10 07:25:41,214 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-10 07:25:41,214 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33651
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 33651: exiting
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 33651
[junit] 2010-12-10 07:25:41,316 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:43287, storageID=DS-1896691708-127.0.1.1-43287-1291965940384, infoPort=50373, ipcPort=33651):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 33651: exiting
[junit] 2010-12-10 07:25:41,316 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 33651: exiting
[junit] 2010-12-10 07:25:41,317 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-10 07:25:41,317 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-10 07:25:41,317 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:43287, storageID=DS-1896691708-127.0.1.1-43287-1291965940384, infoPort=50373, ipcPort=33651):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-10 07:25:41,318 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 33651
[junit] 2010-12-10 07:25:41,318 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-10 07:25:41,318 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-10 07:25:41,318 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-10 07:25:41,318 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-10 07:25:41,420 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-10 07:25:41,420 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 7
[junit] 2010-12-10 07:25:41,420 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 60200
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 60200
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 60200: exiting
[junit] 2010-12-10 07:25:41,423 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 60200: exiting
[junit] 2010-12-10 07:25:41,423 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 60200: exiting
[junit] 2010-12-10 07:25:41,422 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 60200: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.762 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 11 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d16064892c28373e3f7f112c07e982dc but expecting 779cad55065df8db44670bf8766cf5f9
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d16064892c28373e3f7f112c07e982dc but expecting 779cad55065df8db44670bf8766cf5f9
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r3u(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 486 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/486/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 145123 lines...]
[junit] 2010-12-09 23:53:14,616 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-09 23:53:14,717 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48456
[junit] 2010-12-09 23:53:14,717 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 48456: exiting
[junit] 2010-12-09 23:53:14,717 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 48456: exiting
[junit] 2010-12-09 23:53:14,718 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 48456: exiting
[junit] 2010-12-09 23:53:14,718 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 48456
[junit] 2010-12-09 23:53:14,719 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 23:53:14,719 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-09 23:53:14,719 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:40062, storageID=DS-17273153-127.0.1.1-40062-1291938793750, infoPort=44013, ipcPort=48456):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-09 23:53:14,721 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 23:53:14,721 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-09 23:53:14,722 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:40062, storageID=DS-17273153-127.0.1.1-40062-1291938793750, infoPort=44013, ipcPort=48456):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-09 23:53:14,722 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 48456
[junit] 2010-12-09 23:53:14,722 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 23:53:14,722 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-09 23:53:14,722 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-09 23:53:14,723 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-09 23:53:14,824 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 23:53:14,824 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 23:53:14,825 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 3
[junit] 2010-12-09 23:53:14,826 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 57180
[junit] 2010-12-09 23:53:14,826 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 57180
[junit] 2010-12-09 23:53:14,826 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 57180: exiting
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 23:53:14,827 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 57180: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.842 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 9 minutes 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 6d9de0703efa444f45d324aee7f5f7ba but expecting 89541f158bf0bb5aca2c5f3657263d95
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 6d9de0703efa444f45d324aee7f5f7ba but expecting 89541f158bf0bb5aca2c5f3657263d95
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r3w(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 485 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/485/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 144785 lines...]
[junit] 2010-12-09 19:43:21,826 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43071
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 43071: exiting
[junit] 2010-12-09 19:43:21,927 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 43071: exiting
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 43071
[junit] 2010-12-09 19:43:21,928 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 43071: exiting
[junit] 2010-12-09 19:43:21,927 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 19:43:21,928 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:39312, storageID=DS-1841765039-127.0.1.1-39312-1291923800916, infoPort=45410, ipcPort=43071):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-09 19:43:21,930 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 19:43:21,930 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-09 19:43:21,931 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:39312, storageID=DS-1841765039-127.0.1.1-39312-1291923800916, infoPort=45410, ipcPort=43071):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-09 19:43:21,931 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 43071
[junit] 2010-12-09 19:43:21,931 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-09 19:43:21,931 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-09 19:43:21,931 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-09 19:43:21,931 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-09 19:43:21,933 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 19:43:21,933 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 5
[junit] 2010-12-09 19:43:21,933 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 34916
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 34916
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 34916: exiting
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 34916: exiting
[junit] 2010-12-09 19:43:21,936 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-09 19:43:21,935 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 34916: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.572 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 8 minutes 56 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 33107cba943a561d1044566e0043c67e but expecting f77fbcfef771fee5aaf6ee76d1257847
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 33107cba943a561d1044566e0043c67e but expecting f77fbcfef771fee5aaf6ee76d1257847
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1p(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 484 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/484/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 139707 lines...]
[junit] 2010-12-08 07:30:15,926 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-08 07:30:15,926 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-08 07:30:16,028 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52270
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 52270: exiting
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 52270: exiting
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 52270: exiting
[junit] 2010-12-08 07:30:16,029 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:37921, storageID=DS-866126237-127.0.1.1-37921-1291793415028, infoPort=42726, ipcPort=52270):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-08 07:30:16,029 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 07:30:16,030 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-08 07:30:16,029 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 52270
[junit] 2010-12-08 07:30:16,030 INFO datanode.DataNode (DataNode.java:run(1445)) - DatanodeRegistration(127.0.0.1:37921, storageID=DS-866126237-127.0.1.1-37921-1291793415028, infoPort=42726, ipcPort=52270):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-08 07:30:16,030 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 52270
[junit] 2010-12-08 07:30:16,031 INFO datanode.DataNode (DataNode.java:shutdown(771)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-08 07:30:16,031 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-08 07:30:16,031 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-08 07:30:16,031 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-08 07:30:16,133 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 07:30:16,133 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 07:30:16,134 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 9 8
[junit] 2010-12-08 07:30:16,135 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 45830
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 45830: exiting
[junit] 2010-12-08 07:30:16,137 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 45830: exiting
[junit] 2010-12-08 07:30:16,137 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 45830
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 45830: exiting
[junit] 2010-12-08 07:30:16,136 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 45830: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 14.975 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 46 minutes 57 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 38110de9f9d25d861cbc6cc4ff8c872c but expecting 251f3ab2efd7f1fd3feeb9b656d244b3
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 38110de9f9d25d861cbc6cc4ff8c872c but expecting 251f3ab2efd7f1fd3feeb9b656d244b3
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1p(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 483 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/483/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 147582 lines...]
[junit] 2010-12-08 06:38:21,796 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 36829
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 36829: exiting
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 36829: exiting
[junit] 2010-12-08 06:38:21,912 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 36829: exiting
[junit] 2010-12-08 06:38:21,913 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 36829
[junit] 2010-12-08 06:38:21,913 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 06:38:21,913 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-08 06:38:21,913 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:46398, storageID=DS-233984618-127.0.1.1-46398-1291790300916, infoPort=58049, ipcPort=36829):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-08 06:38:21,915 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-08 06:38:21,916 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-08 06:38:21,916 INFO datanode.DataNode (DataNode.java:run(1442)) - DatanodeRegistration(127.0.0.1:46398, storageID=DS-233984618-127.0.1.1-46398-1291790300916, infoPort=58049, ipcPort=36829):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-08 06:38:21,916 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 36829
[junit] 2010-12-08 06:38:21,917 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-08 06:38:21,917 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-08 06:38:21,917 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-08 06:38:21,918 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-08 06:38:22,020 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 06:38:22,020 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 8 7
[junit] 2010-12-08 06:38:22,020 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-08 06:38:22,021 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 36341
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 36341: exiting
[junit] 2010-12-08 06:38:22,023 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-08 06:38:22,023 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 36341
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 36341: exiting
[junit] 2010-12-08 06:38:22,022 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 36341: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 101.505 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 43 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 1d07b190fa86f83dec14b6f09f4be0b0 but expecting c465b6d13b27ceb1695ec8fda2737627
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 1d07b190fa86f83dec14b6f09f4be0b0 but expecting c465b6d13b27ceb1695ec8fda2737627
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1n(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 482 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/482/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1038 lines...]
[ivy:resolve] .............................................................................................................................................................................................................................. (331kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] [SUCCESSFUL ] org.apache.hadoop#avro;1.3.2!avro.jar (1502ms)
[ivy:resolve]
[ivy:resolve] :: problems summary ::
[ivy:resolve] :::: WARNINGS
[ivy:resolve] module not found: org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT
[ivy:resolve] ==== apache-snapshot: tried
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar:
[ivy:resolve] https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.jar
[ivy:resolve] ==== maven2: tried
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.pom
[ivy:resolve] -- artifact org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar:
[ivy:resolve] http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.jar
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: UNRESOLVED DEPENDENCIES ::
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :: org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT: not found
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] :::: ERRORS
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/maven-metadata.xml
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.pom
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-SNAPSHOT.jar
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2.pom
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2.jar
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2-sources.jar
[ivy:resolve] SERVER ERROR: Bad Gateway url=https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/avro/1.3.2/avro-1.3.2-javadoc.jar
[ivy:resolve]
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:1716: impossible to resolve dependencies:
resolve failed - see output for details
Total time: 10 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 481 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/481/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 152903 lines...]
[junit] 2010-12-07 08:44:36,723 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54229
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 54229: exiting
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 54229: exiting
[junit] 2010-12-07 08:44:36,825 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 54229: exiting
[junit] 2010-12-07 08:44:36,826 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 54229
[junit] 2010-12-07 08:44:36,826 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-07 08:44:36,826 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-07 08:44:36,826 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:59864, storageID=DS-1317215496-127.0.1.1-59864-1291711475838, infoPort=60715, ipcPort=54229):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-07 08:44:36,828 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-07 08:44:36,829 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-07 08:44:36,829 INFO datanode.DataNode (DataNode.java:run(1442)) - DatanodeRegistration(127.0.0.1:59864, storageID=DS-1317215496-127.0.1.1-59864-1291711475838, infoPort=60715, ipcPort=54229):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-07 08:44:36,829 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 54229
[junit] 2010-12-07 08:44:36,829 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-07 08:44:36,830 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-07 08:44:36,830 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-07 08:44:36,830 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-07 08:44:36,942 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-07 08:44:36,942 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-07 08:44:36,943 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 7 8
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 47954
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 47954: exiting
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 47954
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 47954: exiting
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 47954: exiting
[junit] 2010-12-07 08:44:36,944 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 47954: exiting
[junit] 2010-12-07 08:44:36,945 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 47954: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 31.549 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 48 minutes 52 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 028f4b400a0f02aaace6ca8713c33f8e but expecting ee62ee5e0aa3186b2f7e7cc8028ec445
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 028f4b400a0f02aaace6ca8713c33f8e but expecting ee62ee5e0aa3186b2f7e7cc8028ec445
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1m(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
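The checksum mismatch above is raised by the fsimage integrity check that runs when the SecondaryNameNode loads the image during a checkpoint: the file's MD5 digest is recomputed and compared against the digest recorded when the image was saved. A minimal sketch of that kind of verification (this is not the actual FSImage code; the class and method names here are illustrative only):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of an MD5 integrity check like the one FSImage.loadFSImage performs:
// hash the image file while reading it, compare against the recorded digest,
// and fail with an IOException on mismatch.
public class ImageChecksum {
    static String md5Hex(Path file) throws IOException {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            try (InputStream in = Files.newInputStream(file)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    md.update(buf, 0, n);
                }
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IOException(e);
        }
    }

    static void verifyImage(Path file, String expected) throws IOException {
        String actual = md5Hex(file);
        if (!actual.equals(expected)) {
            throw new IOException("Image file " + file
                    + " is corrupt with MD5 checksum of " + actual
                    + " but expecting " + expected);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("fsimage", null);
        Files.write(tmp, new byte[] {1, 2, 3});
        String sum = md5Hex(tmp);
        verifyImage(tmp, sum);               // digests match: no exception
        boolean corrupt = false;
        try {
            verifyImage(tmp, "0000");        // wrong digest: flagged as corrupt
        } catch (IOException e) {
            corrupt = true;
        }
        System.out.println(corrupt);
    }
}
```

A mismatch like the one in this failure means the bytes on disk changed between save and load (or two different images were compared), which is why the test treats it as corruption rather than retrying.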
Hadoop-Hdfs-trunk-Commit - Build # 480 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/480/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 150884 lines...]
[junit] 2010-12-06 05:39:01,983 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-06 05:39:01,983 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
[junit] 2010-12-06 05:39:02,085 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44565
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 44565: exiting
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 44565: exiting
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 44565
[junit] 2010-12-06 05:39:02,086 WARN datanode.DataNode (DataXceiverServer.java:run(141)) - DatanodeRegistration(127.0.0.1:58128, storageID=DS-1524592330-127.0.1.1-58128-1291613941001, infoPort=38453, ipcPort=44565):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2010-12-06 05:39:02,086 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 44565: exiting
[junit] 2010-12-06 05:39:02,086 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-06 05:39:02,087 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
[junit] 2010-12-06 05:39:02,087 INFO datanode.DataNode (DataNode.java:run(1442)) - DatanodeRegistration(127.0.0.1:58128, storageID=DS-1524592330-127.0.1.1-58128-1291613941001, infoPort=38453, ipcPort=44565):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/data/data2/current/finalized'}
[junit] 2010-12-06 05:39:02,088 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 44565
[junit] 2010-12-06 05:39:02,088 INFO datanode.DataNode (DataNode.java:shutdown(768)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2010-12-06 05:39:02,088 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
[junit] 2010-12-06 05:39:02,088 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
[junit] 2010-12-06 05:39:02,089 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
[junit] 2010-12-06 05:39:02,190 WARN namenode.FSNamesystem (FSNamesystem.java:run(2822)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-06 05:39:02,190 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2010-12-06 05:39:02,191 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 12 Total time for transactions(ms): 0Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 12 6
[junit] 2010-12-06 05:39:02,192 INFO ipc.Server (Server.java:stop(1611)) - Stopping server on 58063
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 0 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 2 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 5 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 3 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 1 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 58063
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 8 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 9 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 4 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 7 on 58063: exiting
[junit] 2010-12-06 05:39:02,193 INFO ipc.Server (Server.java:run(1444)) - IPC Server handler 6 on 58063: exiting
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 128.781 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:680: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 52 minutes 28 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore
Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d28e4d9e9984bf03a84c64f929bee64e but expecting 74655f21050167571fffcde53aea434c
Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of d28e4d9e9984bf03a84c64f929bee64e but expecting 74655f21050167571fffcde53aea434c
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1063)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:702)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:600)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:477)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm4r1k(TestStorageRestore.java:316)
at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
Hadoop-Hdfs-trunk-Commit - Build # 479 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/479/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-12-04 00:45:28,014 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-04 00:45:28,015 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-04 00:45:28,015 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-04 00:45:28,023 INFO common.Storage (FSImageFormat.java:write(474)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-04 00:45:28,026 INFO common.Storage (FSImageFormat.java:write(498)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-04 00:45:28,027 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-04 00:45:28,027 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-04 00:45:28,032 INFO common.Storage (FSImage.java:format(1339)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-04 00:45:28,033 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-04 00:45:28,034 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-04 00:45:28,034 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-04 00:45:28,034 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-04 00:45:28,035 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-04 00:45:28,035 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-04 00:45:28,038 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-04 00:45:28,039 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-04 00:45:28,039 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-04 00:45:28,039 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-04 00:45:28,040 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-04 00:45:28,040 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-04 00:45:28,041 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-04 00:45:28,041 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-04 00:45:28,042 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-04 00:45:28,043 INFO common.Storage (FSImageFormat.java:load(171)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-04 00:45:28,043 INFO common.Storage (FSImageFormat.java:load(174)) - Number of files = 1
[junit] 2010-12-04 00:45:28,044 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(342)) - Number of files under construction = 0
[junit] 2010-12-04 00:45:28,044 INFO common.Storage (FSImageFormat.java:load(195)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-04 00:45:28,044 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-04 00:45:28,045 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-04 00:45:28,045 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 13 msecs
[junit] 2010-12-04 00:45:28,046 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-04 00:45:28,046 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvxq(363)) - Running __CLR3_0_2q30srsvxq
[junit] 2010-12-04 00:45:28,047 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-04 00:45:28,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-04 00:45:28,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-04 00:45:28,055 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 2592387, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.424 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuon3(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1honl(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcboo4(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513oon(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpop0(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxopc(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
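All six failures above die at the same frame, DataNode.syncBlock(DataNode.java:1883), which is the classic signature of a test fixture handing the method an object whose collaborator field was never initialized. A hypothetical sketch of that failure mode with a fail-fast guard (the field and interface names here are illustrative, not the real DataNode members):

```java
// Hypothetical illustration of the TestBlockRecovery failure mode: a method
// dereferences a collaborator that the test fixture never wired up, so every
// test path through it fails with the same NullPointerException.
import java.util.Objects;

public class SyncBlockNpe {
    interface NamenodeProtocol {      // stand-in for the real namenode proxy
        void commitBlockSynchronization(String blockId);
    }

    static class MiniDataNode {
        NamenodeProtocol namenode;    // stays null unless the fixture sets it

        void syncBlock(String blockId) {
            // Fail fast with a clear message instead of a bare NPE deep inside.
            Objects.requireNonNull(namenode,
                "namenode proxy not initialized; test fixture must set it");
            namenode.commitBlockSynchronization(blockId);
        }
    }

    public static void main(String[] args) {
        MiniDataNode dn = new MiniDataNode();  // fixture forgot the namenode
        try {
            dn.syncBlock("blk_1001");
        } catch (NullPointerException e) {
            System.out.println("guarded: " + e.getMessage());
        }
        dn.namenode = blockId -> System.out.println("committed " + blockId);
        dn.syncBlock("blk_1001");              // now succeeds
    }
}
```

When one setup bug takes down a whole suite like this, the fix usually belongs in the shared setup path (here, TestBlockRecovery.testSyncReplicas at line 144) rather than in each individual test.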
Hadoop-Hdfs-trunk-Commit - Build # 478 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/478/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-12-03 21:45:08,188 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-03 21:45:08,188 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-03 21:45:08,189 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-03 21:45:08,198 INFO common.Storage (FSImageFormat.java:write(474)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-03 21:45:08,201 INFO common.Storage (FSImageFormat.java:write(498)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-03 21:45:08,202 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-03 21:45:08,202 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-03 21:45:08,207 INFO common.Storage (FSImage.java:format(1339)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-03 21:45:08,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-03 21:45:08,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-03 21:45:08,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-03 21:45:08,209 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-03 21:45:08,209 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-03 21:45:08,209 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-03 21:45:08,210 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-03 21:45:08,210 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-03 21:45:08,210 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-03 21:45:08,216 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-03 21:45:08,216 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-03 21:45:08,217 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-03 21:45:08,217 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-03 21:45:08,218 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-03 21:45:08,219 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-03 21:45:08,219 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-03 21:45:08,220 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-03 21:45:08,220 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-03 21:45:08,222 INFO common.Storage (FSImageFormat.java:load(171)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-03 21:45:08,222 INFO common.Storage (FSImageFormat.java:load(174)) - Number of files = 1
[junit] 2010-12-03 21:45:08,223 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(342)) - Number of files under construction = 0
[junit] 2010-12-03 21:45:08,223 INFO common.Storage (FSImageFormat.java:load(195)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-03 21:45:08,224 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-03 21:45:08,267 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-03 21:45:08,268 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 61 msecs
[junit] 2010-12-03 21:45:08,268 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-03 21:45:08,269 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvxq(363)) - Running __CLR3_0_2q30srsvxq
[junit] 2010-12-03 21:45:08,271 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-03 21:45:08,271 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-03 21:45:08,271 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-03 21:45:08,278 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 18082301, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.607 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuon3(TestBlockRecovery.java:165)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1honl(TestBlockRecovery.java:204)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcboo4(TestBlockRecovery.java:243)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513oon(TestBlockRecovery.java:281)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpop0(TestBlockRecovery.java:305)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxopc(TestBlockRecovery.java:329)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
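All six failures share one frame: an NPE at DataNode.syncBlock(DataNode.java:1883), reached through the common helper testSyncReplicas. That pattern is consistent with a collaborator the test's mock setup leaves null being dereferenced inside the sync path. The sketch below is a hypothetical minimal reproduction of that failure shape (invented names, not the actual Hadoop code), showing how an unwired field turns every test routed through the shared helper into the same NullPointerException:

```java
// Hypothetical reproduction of the shared failure shape seen above.
// "NamenodeHandle" and "syncBlock" stand in for the real collaborators;
// the point is only that one null field breaks every caller of the helper.
public class SyncBlockNpeSketch {

    // Stand-in for the namenode-side handle the real sync path calls into.
    static class NamenodeHandle {
        void commitBlockSynchronization() { /* no-op */ }
    }

    // The test's mock setup never wired this up, so it stays null.
    static NamenodeHandle namenode = null;

    // Shared helper, like testSyncReplicas: every test funnels through it.
    static void syncBlock() {
        // First dereference of the unwired field throws the NPE.
        namenode.commitBlockSynchronization();
    }

    public static void main(String[] args) {
        try {
            syncBlock();
            System.out.println("no exception");
        } catch (NullPointerException e) {
            // Mirrors the identical stack trace across all six tests.
            System.out.println("NullPointerException");
        }
    }
}
```

Because the null dereference sits in the shared helper, each individually named test fails with an identical trace, which matches the digests above.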
Hadoop-Hdfs-trunk-Commit - Build # 477 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/477/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4113 lines...]
[junit] 2010-12-02 03:06:11,229 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-02 03:06:11,239 INFO common.Storage (FSImageFormat.java:write(444)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-02 03:06:11,242 INFO common.Storage (FSImageFormat.java:write(468)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-02 03:06:11,243 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-02 03:06:11,243 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-02 03:06:11,251 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-02 03:06:11,251 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-02 03:06:11,252 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-02 03:06:11,252 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-02 03:06:11,252 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-02 03:06:11,253 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-02 03:06:11,253 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-02 03:06:11,253 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-02 03:06:11,253 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-02 03:06:11,254 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-02 03:06:11,261 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-02 03:06:11,261 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-02 03:06:11,262 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-02 03:06:11,262 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-02 03:06:11,263 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-02 03:06:11,263 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-02 03:06:11,264 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-02 03:06:11,264 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-02 03:06:11,265 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-02 03:06:11,266 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-02 03:06:11,267 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-12-02 03:06:11,267 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(311)) - Number of files under construction = 0
[junit] 2010-12-02 03:06:11,267 INFO common.Storage (FSImageFormat.java:load(286)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-02 03:06:11,268 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-02 03:06:11,318 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-02 03:06:11,319 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 68 msecs
[junit] 2010-12-02 03:06:11,319 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-02 03:06:11,320 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvx1(363)) - Running __CLR3_0_2q30srsvx1
[junit] 2010-12-02 03:06:11,321 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-02 03:06:11,322 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-02 03:06:11,322 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-02 03:06:11,328 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 31346136, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.445 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 39 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Error updating JIRA issues. Saving issues for next build.
com.atlassian.jira.rpc.exception.RemotePermissionException: This issue does not exist or you don't have permission to view it.
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuome(TestBlockRecovery.java:165)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homw(TestBlockRecovery.java:204)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonf(TestBlockRecovery.java:243)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513ony(TestBlockRecovery.java:281)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoob(TestBlockRecovery.java:305)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxoon(TestBlockRecovery.java:329)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 476 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/476/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-12-01 22:25:38,274 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-01 22:25:38,274 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-01 22:25:38,275 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-01 22:25:38,283 INFO common.Storage (FSImageFormat.java:write(444)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-01 22:25:38,286 INFO common.Storage (FSImageFormat.java:write(468)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-12-01 22:25:38,286 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-12-01 22:25:38,287 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-12-01 22:25:38,295 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-12-01 22:25:38,295 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-12-01 22:25:38,296 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-12-01 22:25:38,296 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-12-01 22:25:38,297 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-12-01 22:25:38,297 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-12-01 22:25:38,297 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-12-01 22:25:38,312 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-12-01 22:25:38,313 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-12-01 22:25:38,313 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-12-01 22:25:38,313 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-12-01 22:25:38,314 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-12-01 22:25:38,314 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-12-01 22:25:38,315 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-12-01 22:25:38,315 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-01 22:25:38,316 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(311)) - Number of files under construction = 0
[junit] 2010-12-01 22:25:38,317 INFO common.Storage (FSImageFormat.java:load(286)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-12-01 22:25:38,318 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-12-01 22:25:38,318 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-12-01 22:25:38,319 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 24 msecs
[junit] 2010-12-01 22:25:38,319 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-12-01 22:25:38,320 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvx1(363)) - Running __CLR3_0_2q30srsvx1
[junit] 2010-12-01 22:25:38,321 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-12-01 22:25:38,321 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-12-01 22:25:38,321 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-12-01 22:25:38,327 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 15580729, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.434 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 57 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuome(TestBlockRecovery.java:165)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homw(TestBlockRecovery.java:204)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonf(TestBlockRecovery.java:243)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513ony(TestBlockRecovery.java:281)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoob(TestBlockRecovery.java:305)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)

FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas

Error Message:
null

Stack Trace:
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxoon(TestBlockRecovery.java:329)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 475 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/475/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4112 lines...]
[junit] 2010-11-30 06:24:29,034 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 06:24:29,035 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 06:24:29,035 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 06:24:29,044 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 06:24:29,047 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-30 06:24:29,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-30 06:24:29,048 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-30 06:24:29,055 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-30 06:24:29,056 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-30 06:24:29,056 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-30 06:24:29,056 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-30 06:24:29,057 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-30 06:24:29,057 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-30 06:24:29,057 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-30 06:24:29,057 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-30 06:24:29,058 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-30 06:24:29,058 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-30 06:24:29,061 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-30 06:24:29,062 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-30 06:24:29,062 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 06:24:29,063 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 06:24:29,063 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-30 06:24:29,064 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-30 06:24:29,064 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 06:24:29,065 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 06:24:29,065 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 06:24:29,066 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 06:24:29,067 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-30 06:24:29,067 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-30 06:24:29,067 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-30 06:24:29,068 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-30 06:24:29,069 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-30 06:24:29,069 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-30 06:24:29,069 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-30 06:24:29,070 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-30 06:24:29,071 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-30 06:24:29,072 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-30 06:24:29,072 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-30 06:24:29,079 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 4148925, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.517 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 5 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuomb(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homt(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonc(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513onv(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoo8(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxook(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Hadoop-Hdfs-trunk-Commit - Build # 474 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/474/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-11-30 05:58:01,189 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 05:58:01,189 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 05:58:01,189 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 05:58:01,197 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 05:58:01,200 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-30 05:58:01,201 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-30 05:58:01,201 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-30 05:58:01,207 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-30 05:58:01,208 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-30 05:58:01,209 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-30 05:58:01,209 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-30 05:58:01,209 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-30 05:58:01,209 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-30 05:58:01,210 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-30 05:58:01,213 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-30 05:58:01,214 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-30 05:58:01,214 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-30 05:58:01,214 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-30 05:58:01,215 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-30 05:58:01,215 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-30 05:58:01,216 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-30 05:58:01,216 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 05:58:01,217 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-30 05:58:01,218 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-30 05:58:01,219 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-30 05:58:01,219 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-30 05:58:01,219 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-30 05:58:01,220 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-30 05:58:01,220 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-30 05:58:01,221 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-30 05:58:01,221 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-30 05:58:01,222 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-30 05:58:01,223 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-30 05:58:01,223 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-30 05:58:01,224 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-30 05:58:01,231 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 23525817, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.414 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 5 minutes 19 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
Hadoop-Hdfs-trunk-Commit - Build # 473 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/473/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-11-29 07:37:02,860 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 07:37:02,860 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 07:37:02,861 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 07:37:02,868 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 07:37:02,871 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-29 07:37:02,872 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-29 07:37:02,872 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-29 07:37:02,877 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-29 07:37:02,878 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-29 07:37:02,879 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-29 07:37:02,879 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-29 07:37:02,883 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-29 07:37:02,883 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-29 07:37:02,884 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 07:37:02,884 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 07:37:02,884 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-29 07:37:02,885 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-29 07:37:02,885 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 07:37:02,886 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 07:37:02,886 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 07:37:02,887 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 07:37:02,887 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-29 07:37:02,888 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-29 07:37:02,888 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-29 07:37:02,888 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-29 07:37:02,889 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-29 07:37:02,889 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 12 msecs
[junit] 2010-11-29 07:37:02,890 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-29 07:37:02,890 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-29 07:37:02,891 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-29 07:37:02,892 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-29 07:37:02,892 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-29 07:37:02,898 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 9975050, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.625 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
Hadoop-Hdfs-trunk-Commit - Build # 472 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/472/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4111 lines...]
[junit] 2010-11-29 02:56:42,930 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 02:56:42,931 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 02:56:42,931 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 02:56:42,940 INFO common.Storage (FSImageFormat.java:write(441)) - Saving image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 02:56:42,943 INFO common.Storage (FSImageFormat.java:write(465)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-29 02:56:42,944 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-29 02:56:42,944 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-29 02:56:42,949 INFO common.Storage (FSImage.java:format(1338)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-29 02:56:42,950 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-29 02:56:42,950 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-29 02:56:42,951 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-29 02:56:42,951 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-29 02:56:42,951 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-29 02:56:42,951 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-29 02:56:42,952 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-29 02:56:42,952 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-29 02:56:42,952 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-29 02:56:42,956 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-29 02:56:42,956 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-29 02:56:42,957 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-29 02:56:42,957 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-29 02:56:42,958 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-29 02:56:42,958 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-29 02:56:42,959 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-29 02:56:42,959 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 02:56:42,960 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-29 02:56:42,961 INFO common.Storage (FSImageFormat.java:load(170)) - Loading image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/fsimage using no compression
[junit] 2010-11-29 02:56:42,961 INFO common.Storage (FSImageFormat.java:load(176)) - Number of files = 1
[junit] 2010-11-29 02:56:42,961 INFO common.Storage (FSImageFormat.java:loadFilesUnderConstruction(308)) - Number of files under construction = 0
[junit] 2010-11-29 02:56:42,962 INFO common.Storage (FSImageFormat.java:load(283)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-29 02:56:42,962 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-29 02:56:42,963 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-29 02:56:42,963 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 13 msecs
[junit] 2010-11-29 02:56:42,963 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-29 02:56:42,964 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvwy(363)) - Running __CLR3_0_2q30srsvwy
[junit] 2010-11-29 02:56:42,966 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-29 02:56:42,966 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-29 02:56:42,966 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-29 02:56:42,973 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 9975050, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.557 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 2 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuomb(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1homt(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbonc(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513onv(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpoo8(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxook(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
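All six failures above are the same NullPointerException inside DataNode.syncBlock(DataNode.java:1883), reached through testSyncReplicas. A minimal sketch of that failure mode, with entirely hypothetical stand-in names (this is an illustration of how an unstubbed mock surfaces as an NPE downstream, not the actual HDFS code or fix):

```java
// Hedged sketch: a mock object whose accessor was never stubbed returns
// null by default, and the first dereference of that null throws the NPE
// seen in the stack traces above. Names below are invented for illustration.
public class SyncBlockNpeSketch {
    // Stand-in for a mocked BlockInfoUnderConstruction with an
    // unconfigured stub: unstubbed mock methods return null by default.
    static class MockedBlock {
        String[] getExpectedLocations() {
            return null;
        }
    }

    // Unsafe version: mirrors code that assumes the array is non-null.
    static int replicaCountUnsafe(MockedBlock b) {
        return b.getExpectedLocations().length; // NullPointerException here
    }

    // Guarded version: treats a null location list as "no replicas".
    static int replicaCountSafe(MockedBlock b) {
        String[] locs = b.getExpectedLocations();
        return (locs == null) ? 0 : locs.length;
    }

    public static void main(String[] args) {
        MockedBlock b = new MockedBlock();
        System.out.println(replicaCountSafe(b)); // prints 0
        try {
            replicaCountUnsafe(b);
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the stack traces above");
        }
    }
}
```

Whether the real fix belongs in the test's mock setup or in syncBlock itself is not determinable from this log alone.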
Hadoop-Hdfs-trunk-Commit - Build # 471 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/471/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1219 lines...]
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] byte [] name = FSImageSerialization.readBytes(in);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:249: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] v.visit(ImageElement.CLIENT_NAME, FSImageSerialization.readString(in));
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:250: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] v.visit(ImageElement.CLIENT_MACHINE, FSImageSerialization.readString(in));
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:261: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] FSImageSerialization.readString(in);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:262: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] FSImageSerialization.readString(in);
[javac] ^
[javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java:340: cannot find symbol
[javac] symbol : variable FSImageSerialization
[javac] location: class org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent
[javac] v.visit(ImageElement.INODE_PATH, FSImageSerialization.readString(in));
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 39 errors
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:335: Compile failed; see the compiler error output for details.
Total time: 41 seconds
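All 39 compile errors in this console tail are the same symbol-resolution failure: ImageLoaderCurrent refers to FSImageSerialization, and javac reports it as an unresolved variable because no matching class is in scope. A plausible cause, not verified against this build, is a missing or stale import of the namenode serialization helper; the fragment below is illustrative only and not runnable on its own:

```java
// ImageLoaderCurrent.java (fragment, not self-contained):
// "cannot find symbol: variable FSImageSerialization" typically means the
// class is not in scope; assuming it lives in its usual package, the
// import would look like this:
import org.apache.hadoop.hdfs.server.namenode.FSImageSerialization;
```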
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
Hadoop-Hdfs-trunk-Commit - Build # 470 - Still Failing
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/470/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4086 lines...]
[junit] 2010-11-24 23:05:18,111 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-24 23:05:18,111 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-24 23:05:18,112 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-24 23:05:18,112 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-24 23:05:18,112 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occurring more than 10 times
[junit] 2010-11-24 23:05:18,116 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-24 23:05:18,117 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-24 23:05:18,117 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-24 23:05:18,123 INFO common.Storage (FSImage.java:format(1639)) - Storage directory /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name has been successfully formatted.
[junit] 2010-11-24 23:05:18,124 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-24 23:05:18,124 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-24 23:05:18,125 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-24 23:05:18,125 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-24 23:05:18,125 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-24 23:05:18,126 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-24 23:05:18,130 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-24 23:05:18,130 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-24 23:05:18,131 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-24 23:05:18,131 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-24 23:05:18,132 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-24 23:05:18,132 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-24 23:05:18,134 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occurring more than 10 times
[junit] 2010-11-24 23:05:18,134 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-24 23:05:18,135 WARN common.Util (Util.java:stringAsURI(63)) - Path /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-24 23:05:18,136 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-24 23:05:18,137 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-24 23:05:18,137 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-24 23:05:18,137 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-24 23:05:18,138 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-24 23:05:18,138 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-24 23:05:18,139 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-24 23:05:18,140 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvuz(363)) - Running __CLR3_0_2q30srsvuz
[junit] 2010-11-24 23:05:18,141 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-24 23:05:18,141 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-24 23:05:18,142 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-24 23:05:18,149 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 11068806, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.517 sec
checkfailure:
[touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed
BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:674: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:637: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:705: Tests failed!
Total time: 1 minute 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2lttijuokc(TestBlockRecovery.java:165)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedReplicas(TestBlockRecovery.java:153)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2c2lg1hoku(TestBlockRecovery.java:204)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRbwReplicas(TestBlockRecovery.java:190)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_29tewcbold(TestBlockRecovery.java:243)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testFinalizedRwrReplicas(TestBlockRecovery.java:229)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2cqk513olw(TestBlockRecovery.java:281)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBWReplicas(TestBlockRecovery.java:269)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2396azpom9(TestBlockRecovery.java:305)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRBW_RWRReplicas(TestBlockRecovery.java:293)
FAILED: org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas
Error Message:
null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1883)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testSyncReplicas(TestBlockRecovery.java:144)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.__CLR3_0_2ahdlbxoml(TestBlockRecovery.java:329)
at org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRWRReplicas(TestBlockRecovery.java:317)
Build failed in Hudson: Hadoop-Hdfs-trunk-Commit #469
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/469/>
------------------------------------------
[...truncated 2441 lines...]
ivy-init:
ivy-resolve-common:
ivy-retrieve-common:
init:
[touch] Creating /tmp/null1981533946
[delete] Deleting: /tmp/null1981533946
compile-hdfs-classes:
[paranamer] Generating parameter names from <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/protocol> to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
[paranamer] Generating parameter names from <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/server/protocol> to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
compile-core:
jar:
[jar] Building jar: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/hadoop-hdfs-0.23.0-SNAPSHOT.jar>
findbugs:
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/findbugs>
[findbugs] Executing findbugs from ant task
[findbugs] Running FindBugs...
[findbugs] Calculating exit code...
[findbugs] Exit code set to: 0
[findbugs] Output saved to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/findbugs/hadoop-findbugs-report.xml>
[xslt] Processing <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/findbugs/hadoop-findbugs-report.xml> to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/findbugs/hadoop-findbugs-report.html>
[xslt] Loading stylesheet /homes/gkesavan/tools/findbugs/latest/src/xsl/default.xsl
BUILD SUCCESSFUL
Total time: 6 minutes 0 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
======================================================================
======================================================================
CLEAN: cleaning workspace
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
[delete] Deleting directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/contrib/hdfsproxy>
clean:
[echo] contrib: thriftfs
[delete] Deleting directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/contrib/thriftfs>
clean-fi:
[delete] Deleting directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build-fi>
clean-sign:
clean:
[delete] Deleting directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build>
[delete] Deleting directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/docs/build>
[delete] Deleting: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/hadoop-hdfs.xml>
[delete] Deleting: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/hadoop-hdfs-test.xml>
[delete] Deleting: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/hadoop-hdfs-instrumented.xml>
[delete] Deleting: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/hadoop-hdfs-instrumented-test.xml>
BUILD SUCCESSFUL
Total time: 1 second
======================================================================
======================================================================
ANALYSIS: ant -Drun.clover=true clover checkstyle run-commit-test generate-clover-reports -Dtest.junit.output.format=xml -Dtest.output=no -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clover.setup:
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/clover/db>
[clover-setup] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-setup] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-setup] Clover: Open Source License registered to Apache.
[clover-setup] Clover is enabled with initstring '<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/clover/db/hadoop_coverage.db>'
[echo] HDFS-783: test-libhdfs is disabled for Clover'ed builds
clover.info:
clover:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/ivy-2.1.0.jar>
[get] Not modified - so not downloaded
ivy-init-dirs:
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy/lib>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy/report>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy/maven>
ivy-probe-antlib:
ivy-init-antlib:
ivy-init:
[ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/ivysettings.xml>
ivy-resolve-checkstyle:
ivy-retrieve-checkstyle:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/ivysettings.xml>
check-for-checkstyle:
checkstyle:
[checkstyle] Running Checkstyle 4.2 on 197 files
[xslt] Processing <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/469/artifact/trunk/build/test/checkstyle-errors.xml> to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/469/artifact/trunk/build/test/checkstyle-errors.html>
[xslt] Loading stylesheet <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/test/checkstyle-noframes-sorted.xsl>
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/ivy-2.1.0.jar>
[get] Not modified - so not downloaded
ivy-init-dirs:
ivy-probe-antlib:
ivy-init-antlib:
ivy-init:
ivy-resolve-common:
ivy-retrieve-common:
init:
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/src>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps/hdfs/WEB-INF>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps/datanode/WEB-INF>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps/secondary/WEB-INF>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ant>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/c++>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/hdfs/classes>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/extraconf>
[touch] Creating /tmp/null242765010
[delete] Deleting: /tmp/null242765010
[copy] Copying 3 files to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps>
compile-hdfs-classes:
[javac] Compiling 207 source files to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
[clover] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover] Clover: Open Source License registered to Apache.
[clover] Creating new database at '<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/clover/db/hadoop_coverage.db>'.
[clover] Processing files at 1.6 source level.
[clover] Clover all over. Instrumented 197 files (15 packages).
[clover] Elapsed time = 2.033 secs. (96.901 files/sec, 30,690.113 srclines/sec)
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[paranamer] Generating parameter names from <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/protocol> to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
[paranamer] Generating parameter names from <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/server/protocol> to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
ivy-resolve-test:
ivy-retrieve-test:
compile-hdfs-test:
[javac] Compiling 186 source files to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/hdfs/classes>
[clover] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover] Clover: Open Source License registered to Apache.
[clover] Updating existing database at '<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/clover/db/hadoop_coverage.db>'.
[clover] Processing files at 1.6 source level.
[clover] Clover all over. Instrumented 186 files (18 packages).
[clover] 421 test methods detected.
[clover] Elapsed time = 1.49 secs. (124.832 files/sec, 29,434.229 srclines/sec)
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/cache>
run-commit-test:
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/logs>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/extraconf>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/extraconf>
[junit] WARNING: multiple versions of ant detected in path for junit
[junit] jar:file:/homes/hudson/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit] and jar:file:/homes/hudson/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
[junit] Tests run: 12, Failures: 0, Errors: 6, Time elapsed: 5.07 sec
[junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.479 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.46 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestINodeFile
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.241 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestNNLeaseRecovery
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.311 sec
checkfailure:
[touch] Creating <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/testsfailed>
BUILD FAILED
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:674: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:637: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:705: Tests failed!
Total time: 34 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Build failed in Hudson: Hadoop-Hdfs-trunk-Commit #468
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/468/changes>
Changes:
[cos] HDFS-1516. mvn-install is broken after 0.22 branch creation. Contributed by Konstantin Boudnik.
------------------------------------------
[...truncated 3893 lines...]
[junit] 2010-11-23 23:16:30,094 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,094 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 23:16:30,098 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,098 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,099 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,099 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,100 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,100 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 23:16:30,102 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,102 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,103 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,104 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 23:16:30,105 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 23:16:30,105 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,105 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,106 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 23:16:30,106 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-23 23:16:30,107 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 23:16:30,107 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2cpnfswvsm(253)) - Running __CLR3_0_2cpnfswvsm
[junit] 2010-11-23 23:16:30,109 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 23:16:30,109 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 23:16:30,109 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
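The two DEBUG preallocation lines above are internally consistent: the fill block is written 1 MiB (1048576 bytes) past the current end of file, and the new size is that offset plus the 512 bytes written. A small Python model of this arithmetic (an assumed reading of `EditLogFileOutputStream.preallocate`, not the actual implementation):

```python
MB = 1024 * 1024   # 1 MiB preallocation distance, inferred from the log
FILL = 512         # bytes written per preallocation step, from the log

def preallocated_size(current_size):
    """Model the DEBUG pair: a 512-byte fill block is written 1 MiB
    past the current end of the edit log, extending the file."""
    offset = current_size + MB
    return offset, offset + FILL

print(preallocated_size(4))  # (1048580, 1049092) as logged here
print(preallocated_size(0))  # (1048576, 1049088) as logged after a format
```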
[junit] 2010-11-23 23:16:30,117 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 20245380, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 23:16:30,138 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 23:16:30,138 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,139 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,139 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,140 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,141 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,141 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,141 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,141 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,142 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,142 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,142 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,143 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
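The four GSet lines above hang together arithmetically: on a 32-bit VM, 2% of the max heap (9.86125 MB) divided by a 4-byte reference size, rounded down to a power of two, gives the logged capacity of 2^21 entries. A Python sketch of that computation (assumed logic, modeled on `BlocksMap.computeCapacity`; the constants are taken from the log):

```python
import math

# "2% max memory = 9.86125 MB" and "VM type = 32-bit" from the log
two_percent_mb = 9.86125
ref_size = 4  # bytes per object reference on a 32-bit VM (assumption)

# Entries that fit in 2% of the heap, rounded down to a power of two
entries = two_percent_mb * 1024 * 1024 / ref_size
capacity = 2 ** int(math.log2(entries))
print(capacity)  # 2097152, i.e. the logged "capacity = 2^21"
```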
[junit] 2010-11-23 23:16:30,149 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,150 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,150 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,150 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,151 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,154 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 23:16:30,156 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 23:16:30,156 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 23:16:30,163 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 23:16:30,164 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,165 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,165 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,165 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,166 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,166 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,166 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,167 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,167 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 23:16:30,182 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,183 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,183 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,183 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,184 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,185 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 23:16:30,187 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,187 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,188 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,189 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 23:16:30,189 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 23:16:30,190 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,190 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,191 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 23:16:30,191 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 28 msecs
[junit] 2010-11-23 23:16:30,191 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 23:16:30,192 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2xseoacvt1(279)) - Running __CLR3_0_2xseoacvt1
[junit] 2010-11-23 23:16:30,194 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 23:16:30,194 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 23:16:30,194 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 23:16:30,202 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 26918187, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 23:16:30,221 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 23:16:30,222 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,222 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,223 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,224 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,224 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,224 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,225 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,225 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,225 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,225 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,226 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,226 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 23:16:30,229 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,230 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,230 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,230 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,231 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,234 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 23:16:30,235 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 23:16:30,236 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 23:16:30,241 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 23:16:30,242 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,242 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,242 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,242 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,243 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,243 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,243 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,243 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,244 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 23:16:30,247 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,248 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,248 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,248 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,249 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,249 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 23:16:30,251 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,252 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,252 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,253 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 23:16:30,254 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 23:16:30,254 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,254 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,255 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 23:16:30,255 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-23 23:16:30,256 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 23:16:30,257 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2wnrgefvth(307)) - Running __CLR3_0_2wnrgefvth
[junit] 2010-11-23 23:16:30,258 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 23:16:30,258 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 23:16:30,259 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 23:16:30,266 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 5009874, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 23:16:30,285 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 23:16:30,286 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,287 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,287 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,288 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,288 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,289 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,289 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,289 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,290 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,290 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,290 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,291 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 23:16:30,307 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,307 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,307 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,308 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,308 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,312 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 23:16:30,313 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 23:16:30,313 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 23:16:30,320 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 23:16:30,321 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,322 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,322 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,322 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,322 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,323 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,323 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,323 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,323 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 23:16:30,327 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,328 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,328 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,328 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,329 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,329 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 23:16:30,331 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,331 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,332 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,333 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 23:16:30,334 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 23:16:30,334 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,334 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,335 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 23:16:30,335 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 14 msecs
[junit] 2010-11-23 23:16:30,336 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 23:16:30,336 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2f0vsivvtx(335)) - Running __CLR3_0_2f0vsivvtx
[junit] 2010-11-23 23:16:30,338 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 23:16:30,338 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 23:16:30,338 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 23:16:30,345 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 25591289, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 23:16:30,364 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 23:16:30,365 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,365 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,366 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,367 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,367 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,367 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,367 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,368 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,368 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,368 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,368 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,369 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
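(Aside, for anyone reading these GSet lines: the numbers are self-consistent. A rough sketch of the arithmetic, assuming a 4-byte reference on the 32-bit VM and round-down-to-a-power-of-two — the exact formula in BlocksMap.computeCapacity may differ:)

```python
import math

def compute_capacity(max_memory_bytes, percentage=0.02, ref_size=4):
    """Sketch of the capacity rule implied by the log: take 2% of max
    heap, divide by the reference size (4 bytes on a 32-bit VM), and
    round down to a power of two."""
    refs = int(max_memory_bytes * percentage) // ref_size
    return 2 ** int(math.log2(refs))

# Values from the log: "2% max memory = 9.86125 MB" on a 32-bit VM.
max_mem = int((9.86125 / 0.02) * 1024 * 1024)  # back out the full heap
cap = compute_capacity(max_mem)
# cap == 2097152, matching "capacity = 2^21 = 2097152 entries" above
```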
[junit] 2010-11-23 23:16:30,372 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,373 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,373 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,373 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,374 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,377 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 23:16:30,378 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 23:16:30,378 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 23:16:30,383 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 23:16:30,384 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 23:16:30,385 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 23:16:30,385 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 23:16:30,385 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 23:16:30,385 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 23:16:30,386 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 23:16:30,386 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 23:16:30,386 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 23:16:30,387 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 23:16:30,393 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 23:16:30,394 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 23:16:30,394 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 23:16:30,394 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 23:16:30,395 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 23:16:30,395 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 23:16:30,397 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 23:16:30,398 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,398 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 23:16:30,400 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 23:16:30,400 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 23:16:30,401 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,401 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 23:16:30,402 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 23:16:30,403 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 19 msecs
[junit] 2010-11-23 23:16:30,403 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 23:16:30,404 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvud(363)) - Running __CLR3_0_2q30srsvud
[junit] 2010-11-23 23:16:30,405 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 23:16:30,406 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 23:16:30,406 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
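(The preallocation pairs in this log also line up arithmetically: 512 bytes are written at current size plus 1 MB. A minimal sketch of that bookkeeping, inferred from the offsets above rather than from the EditLogFileOutputStream source:)

```python
PREALLOC_OFFSET = 1024 * 1024  # 1 MB, inferred from the logged offsets
CHUNK = 512                    # bytes written per preallocation step

def preallocate(current_size):
    """Return (write_offset, new_size) for one preallocation step,
    matching the DEBUG pairs in the log: offset = current + 1 MB,
    new size = offset + 512."""
    offset = current_size + PREALLOC_OFFSET
    return offset, offset + CHUNK

# Log: "current size 4" -> "written 512 bytes at offset 1048580",
# "Edit log size is now 1049092"
offset, new_size = preallocate(4)
```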
[junit] 2010-11-23 23:16:30,413 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 24769387, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.367 sec
checkfailure:
[touch] Creating <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/testsfailed>
BUILD FAILED
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:674: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:637: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:705: Tests failed!
Total time: 57 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Build failed in Hudson: Hadoop-Hdfs-trunk-Commit #467
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/467/changes>
Changes:
[hairong] HDFS-1482. Add listCorruptFileBlocks to DistributedFileSystem. Contributed by Patrick Kling.
------------------------------------------
[...truncated 901 lines...]
A bin/hdfs-config.sh
AU bin/start-dfs.sh
AU bin/stop-balancer.sh
AU bin/hdfs
A bin/stop-secure-dns.sh
AU bin/stop-dfs.sh
AU bin/start-balancer.sh
A bin/start-secure-dns.sh
AU build.xml
U .
Fetching 'https://svn.apache.org/repos/asf/hadoop/common/trunk/src/test/bin' at -1 into '<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/test/bin'>
AU src/test/bin/test-patch.sh
At revision 1038227
At revision 1038226
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
A commitBuild.sh
A hudsonEnv.sh
AU hudsonBuildHadoopNightly.sh
AU hudsonBuildHadoopPatch.sh
AU hudsonBuildHadoopRelease.sh
AU processHadoopPatchEmailRemote.sh
AU hudsonPatchQueueAdmin.sh
AU processHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1038227
no change for http://svn.apache.org/repos/asf/hadoop/nightly since the previous build
no change for https://svn.apache.org/repos/asf/hadoop/common/trunk/src/test/bin since the previous build
[Hadoop-Hdfs-trunk-Commit] $ /bin/bash /tmp/hudson873403136010013923.sh
======================================================================
======================================================================
CLEAN: cleaning workspace
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
BUILD SUCCESSFUL
Total time: 0 seconds
======================================================================
======================================================================
BUILD: ant veryclean mvn-deploy tar findbugs -Dtest.junit.output.format=xml -Dtest.output=yes -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Djava5.home=$JAVA5_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================
Buildfile: build.xml
clean-contrib:
clean:
check-libhdfs-fuse:
clean:
Trying to override old definition of task macro_tar
clean:
[echo] contrib: hdfsproxy
clean:
[echo] contrib: thriftfs
clean-fi:
clean-sign:
clean:
clean-cache:
[delete] Deleting directory /homes/hudson/.ivy2/cache/org.apache.hadoop
veryclean:
ant-task-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/maven/maven-ant-tasks/2.0.10/maven-ant-tasks-2.0.10.jar
[get] To: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/maven-ant-tasks-2.0.10.jar>
mvn-taskdef:
clover.setup:
clover.info:
clover:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/ivy-2.1.0.jar>
ivy-init-dirs:
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy/lib>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy/report>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ivy/maven>
ivy-probe-antlib:
ivy-init-antlib:
ivy-init:
[ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/ivysettings.xml>
ivy-resolve-common:
[ivy:resolve] downloading https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.22.0-SNAPSHOT/hadoop-common-0.22.0-20101119.063222-143.jar ...
[ivy:resolve] ...................................................................................................................................................................................................... (1339kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] [SUCCESSFUL ] org.apache.hadoop#hadoop-common;0.22.0-SNAPSHOT!hadoop-common.jar (551ms)
[ivy:resolve] downloading http://repo1.maven.org/maven2/org/apache/hadoop/avro/1.3.2/avro-1.3.2.jar ...
[ivy:resolve] ................................................................................................................................................................................................................................... (331kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] [SUCCESSFUL ] org.apache.hadoop#avro;1.3.2!avro.jar (1508ms)
ivy-retrieve-common:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/ivy/ivysettings.xml>
init:
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/src>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps/hdfs/WEB-INF>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps/datanode/WEB-INF>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps/secondary/WEB-INF>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/ant>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/c++>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/hdfs/classes>
[mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/extraconf>
[touch] Creating /tmp/null2077227495
[delete] Deleting: /tmp/null2077227495
[copy] Copying 3 files to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/webapps>
[copy] Copying 1 file to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/conf>
[copy] Copying <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/conf/hdfs-site.xml.template> to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/conf/hdfs-site.xml>
compile-hdfs-classes:
[javac] Compiling 207 source files to <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/classes>
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/DFSClient.java>:64: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: package org.apache.hadoop.fs
[javac] import org.apache.hadoop.fs.CorruptFileBlocks;
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/fs/Hdfs.java>:310: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: class org.apache.hadoop.fs.Hdfs
[javac] public CorruptFileBlocks listCorruptFileBlocks(String path,
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java>:35: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: package org.apache.hadoop.fs
[javac] import org.apache.hadoop.fs.CorruptFileBlocks;
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/DFSClient.java>:1124: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: class org.apache.hadoop.hdfs.DFSClient
[javac] public CorruptFileBlocks listCorruptFileBlocks(String path,
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java>:671: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: interface org.apache.hadoop.hdfs.protocol.ClientProtocol
[javac] public CorruptFileBlocks
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/DistributedFileSystem.java>:46: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: package org.apache.hadoop.fs
[javac] import org.apache.hadoop.fs.CorruptFileBlocks;
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/DistributedFileSystem.java>:609: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: class org.apache.hadoop.hdfs.DistributedFileSystem
[javac] public CorruptFileBlocks listCorruptFileBlocks(String path,
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java>:46: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: package org.apache.hadoop.fs
[javac] import org.apache.hadoop.fs.CorruptFileBlocks;
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java>:1132: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: class org.apache.hadoop.hdfs.server.namenode.NameNode
[javac] public CorruptFileBlocks
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/fs/Hdfs.java>:309: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/DistributedFileSystem.java>:608: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/src/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java>:1145: cannot find symbol
[javac] symbol : class CorruptFileBlocks
[javac] location: class org.apache.hadoop.hdfs.server.namenode.NameNode
[javac] return new CorruptFileBlocks(files, lastCookie);
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 12 errors
BUILD FAILED
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:336: Compile failed; see the compiler error output for details.
Total time: 14 seconds
======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================
mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Build failed in Hudson: Hadoop-Hdfs-trunk-Commit #466
Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/466/changes>
Changes:
[hairong] HDFS-1481. NameNode should validate fsimage before rolling. Contributed by Hairong Kuang.
------------------------------------------
[...truncated 3893 lines...]
[junit] 2010-11-23 07:19:15,492 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,492 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,495 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,496 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,496 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,496 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,497 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,497 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 07:19:15,499 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,499 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,500 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,501 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 07:19:15,501 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 07:19:15,502 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,614 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,615 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 07:19:15,615 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 126 msecs
[junit] 2010-11-23 07:19:15,616 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 07:19:15,616 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2cpnfswvqn(253)) - Running __CLR3_0_2cpnfswvqn
[junit] 2010-11-23 07:19:15,618 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 07:19:15,618 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 07:19:15,619 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 07:19:15,626 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 5683514, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 07:19:15,646 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 07:19:15,647 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,647 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,648 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,649 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,649 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,649 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,650 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,650 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,650 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,650 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,651 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,651 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,654 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,655 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,655 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,656 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,656 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,660 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 07:19:15,661 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 07:19:15,661 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 07:19:15,668 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 07:19:15,669 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,669 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,670 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,670 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,670 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,670 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,671 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,671 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,671 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,675 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,675 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,676 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,676 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,677 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,678 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 07:19:15,679 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,680 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,680 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,682 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 07:19:15,682 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 07:19:15,683 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,683 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,684 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 07:19:15,684 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 16 msecs
[junit] 2010-11-23 07:19:15,684 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 07:19:15,685 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2xseoacvr2(279)) - Running __CLR3_0_2xseoacvr2
[junit] 2010-11-23 07:19:15,687 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 07:19:15,687 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 07:19:15,687 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
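(Aside on the two DEBUG lines above: they show the edit log being extended in zeroed chunks ahead of the write position, so syncs don't pay for incremental file growth. The sketch below is only an illustration of that pattern with the sizes read off the logged numbers — 512-byte chunks, roughly 1 MB of runway — not the actual EditLogFileOutputStream implementation.)

```python
import io

CHUNK = 512              # zero bytes written per top-up (from the log)
PREALLOC = 1024 * 1024   # keep about 1 MB of runway past the live data

def preallocate(f, data_size):
    """Extend file `f` with zeroed chunks so upcoming writes land in
    already-allocated space. Mirrors the logged behavior: a 4-byte
    edits file gets 512 zero bytes written at offset 4 + 1048576."""
    f.seek(0, 2)                     # find current end of file
    size = f.tell()
    if size - data_size < CHUNK:     # out of runway: top up
        f.seek(data_size + PREALLOC) # jump ~1 MB past the live data
        f.write(b"\x00" * CHUNK)     # "written 512 bytes at offset ..."
    return f.tell()

# Replaying the logged case ("current size 4"): the file ends up at
# 1048580 + 512 = 1049092 bytes, matching "Edit log size is now 1049092".
f = io.BytesIO(b"\x00" * 4)
print(preallocate(f, 4))  # 1049092
```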
[junit] 2010-11-23 07:19:15,694 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 20545116, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 07:19:15,713 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 07:19:15,714 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,715 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,715 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,716 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,716 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,717 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,717 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,717 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,718 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,718 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,718 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,719 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
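(Aside on the four util.GSet lines above: they record a simple sizing rule — take 2% of the JVM max heap, divide by the per-entry reference size, and round down to a power of two. A minimal sketch of that arithmetic follows; the helper name and the 4-byte reference size for a 32-bit VM are assumptions for illustration, not taken from the log.)

```python
def compute_capacity(max_heap_bytes, ref_size_bytes=4, fraction=0.02):
    """Largest power-of-two entry count fitting in `fraction` of the heap.

    Mirrors the BlocksMap.computeCapacity log lines: 2% of max memory,
    divided by an assumed 4-byte reference (32-bit VM), rounded down
    to a power of two.
    """
    budget = max_heap_bytes * fraction        # "2% max memory = 9.86125 MB"
    entries = int(budget / ref_size_bytes)    # references that fit in budget
    capacity = 1
    while capacity * 2 <= entries:            # round down to a power of two
        capacity *= 2
    return capacity

# Heap implied by the log (2% of it is 9.86125 MB):
heap = int(9.86125 * 2**20 / 0.02)
print(compute_capacity(heap))  # 2097152, i.e. 2^21 as logged
```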
[junit] 2010-11-23 07:19:15,722 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,723 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,723 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,724 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,724 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,729 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 07:19:15,730 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 07:19:15,731 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 07:19:15,736 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 07:19:15,737 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,737 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,738 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,738 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,738 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,739 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,739 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,739 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,740 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,744 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,745 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,745 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,745 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,746 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,747 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 07:19:15,748 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,749 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,749 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,751 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 07:19:15,751 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 07:19:15,752 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,752 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,753 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 07:19:15,753 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 17 msecs
[junit] 2010-11-23 07:19:15,753 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 07:19:15,754 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2wnrgefvri(307)) - Running __CLR3_0_2wnrgefvri
[junit] 2010-11-23 07:19:15,756 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 07:19:15,756 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 07:19:15,756 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 07:19:15,763 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 25591289, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 07:19:15,782 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 07:19:15,783 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,783 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,784 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,785 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,785 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,785 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,785 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,786 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,786 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,786 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,787 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,787 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,790 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,791 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,791 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,792 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,792 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,796 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 07:19:15,797 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 07:19:15,797 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 07:19:15,803 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 07:19:15,803 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,804 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,804 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,804 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,805 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,805 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,805 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,806 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,806 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,810 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,810 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,810 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,811 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,811 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,812 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 07:19:15,814 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,814 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,815 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,816 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 07:19:15,816 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 07:19:15,817 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,817 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,818 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 07:19:15,818 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 15 msecs
[junit] 2010-11-23 07:19:15,819 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 07:19:15,819 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2f0vsivvry(335)) - Running __CLR3_0_2f0vsivvry
[junit] 2010-11-23 07:19:15,821 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 07:19:15,821 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 07:19:15,822 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 07:19:15,829 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 23342038, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] 2010-11-23 07:19:15,847 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=NameNode, sessionId=null - already initialized
[junit] 2010-11-23 07:19:15,848 INFO metrics.NameNodeMetrics (NameNodeMetrics.java:<init>(113)) - Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,849 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,849 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,850 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,850 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,851 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,851 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,851 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,852 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,852 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,852 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,853 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,860 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,860 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,861 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,861 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,861 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,865 INFO common.Storage (FSImage.java:saveFSImage(1412)) - Image file of size 113 saved in 0 seconds.
[junit] 2010-11-23 07:19:15,909 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 0
[junit] 2010-11-23 07:19:15,909 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049088 written 512 bytes at offset 1048576
[junit] 2010-11-23 07:19:15,914 INFO common.Storage (FSImage.java:format(1639)) - Storage directory <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> has been successfully formatted.
[junit] 2010-11-23 07:19:15,915 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(173)) - defaultReplication = 3
[junit] 2010-11-23 07:19:15,916 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(174)) - maxReplication = 512
[junit] 2010-11-23 07:19:15,916 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(175)) - minReplication = 1
[junit] 2010-11-23 07:19:15,916 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(176)) - maxReplicationStreams = 2
[junit] 2010-11-23 07:19:15,917 INFO namenode.FSNamesystem (BlockManager.java:setConfigurationParameters(177)) - shouldCheckForEnoughRacks = false
[junit] 2010-11-23 07:19:15,917 INFO util.GSet (BlocksMap.java:computeCapacity(84)) - VM type = 32-bit
[junit] 2010-11-23 07:19:15,917 INFO util.GSet (BlocksMap.java:computeCapacity(85)) - 2% max memory = 9.86125 MB
[junit] 2010-11-23 07:19:15,917 INFO util.GSet (BlocksMap.java:computeCapacity(86)) - capacity = 2^21 = 2097152 entries
[junit] 2010-11-23 07:19:15,918 INFO util.GSet (LightWeightGSet.java:<init>(82)) - recommended=2097152, actual=2097152
[junit] 2010-11-23 07:19:15,921 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(460)) - fsOwner=hudson
[junit] 2010-11-23 07:19:15,922 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(466)) - supergroup=supergroup
[junit] 2010-11-23 07:19:15,922 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(467)) - isPermissionEnabled=false
[junit] 2010-11-23 07:19:15,923 INFO namenode.FSNamesystem (FSNamesystem.java:setConfigurationParameters(508)) - isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
[junit] 2010-11-23 07:19:15,923 INFO metrics.FSNamesystemMetrics (FSNamesystemMetrics.java:<init>(80)) - Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
[junit] 2010-11-23 07:19:15,924 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4469)) - Registered FSNamesystemStatusMBean
[junit] 2010-11-23 07:19:15,926 INFO namenode.NameNode (FSDirectory.java:<init>(125)) - Caching file names occuring more than 10 times
[junit] 2010-11-23 07:19:15,926 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,927 WARN common.Util (Util.java:stringAsURI(63)) - Path <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name> should be specified as a URI in configuration files. Please update hdfs configuration.
[junit] 2010-11-23 07:19:15,928 INFO common.Storage (FSImage.java:loadFSImage(1175)) - Number of files = 1
[junit] 2010-11-23 07:19:15,928 INFO common.Storage (FSImage.java:loadFilesUnderConstruction(1755)) - Number of files under construction = 0
[junit] 2010-11-23 07:19:15,929 INFO common.Storage (FSImage.java:loadFSImage(1289)) - Image file of size 113 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,929 INFO common.Storage (FSEditLogLoader.java:loadFSEdits(61)) - Edits file <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/data/dfs/name/current/edits> of size 4 edits # 0 loaded in 0 seconds.
[junit] 2010-11-23 07:19:15,930 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[junit] 2010-11-23 07:19:15,930 INFO namenode.FSNamesystem (FSNamesystem.java:initialize(309)) - Finished loading FSImage in 15 msecs
[junit] 2010-11-23 07:19:15,931 INFO util.HostsFileReader (HostsFileReader.java:refresh(85)) - Refreshing hosts (include/exclude) list
[junit] 2010-11-23 07:19:15,931 DEBUG namenode.TestNNLeaseRecovery (TestNNLeaseRecovery.java:__CLR3_0_2q30srsvse(363)) - Running __CLR3_0_2q30srsvse
[junit] 2010-11-23 07:19:15,933 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(631)) - Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
[junit] 2010-11-23 07:19:15,933 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(173)) - Preallocating Edit log, current size 4
[junit] 2010-11-23 07:19:15,934 DEBUG namenode.FSNamesystem (EditLogFileOutputStream.java:preallocate(180)) - Edit log size is now 1049092 written 512 bytes at offset 1048580
[junit] 2010-11-23 07:19:15,940 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2326)) - commitBlockSynchronization(lastblock=Mock for BlockInfoUnderConstruction, hashCode: 25862088, newgenerationstamp=2002, newlength=273487234, newtargets=[null], closeFile=true, deleteBlock=false)
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 2.524 sec
checkfailure:
[touch] Creating <https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build/test/testsfailed>
BUILD FAILED
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:675: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:638: The following error occurred while executing this line:
<https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/ws/trunk/build.xml>:706: Tests failed!
Total time: 55 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure