Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2012/10/31 13:51:52 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk #1212

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/changes>

Changes:

[suresh] HDFS-4129. Add utility methods to dump NameNode in memory tree for testing. Contributed by Tsz Wo (Nicholas), SZE.

[suresh] HDFS-3916. libwebhdfs testing code cleanup. Contributed by Jing Zhao.

[umamahesh] Moved HDFS-3809 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[umamahesh] Moved HDFS-3789 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[umamahesh] Moved HDFS-3695 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[bobby] HADOOP-8986. Server$Call object is never released after it is sent (bobby)

[umamahesh] Moved HDFS-3573 entry in CHANGES.txt from trunk to 2.0.3-alpha section

[daryn] HADOOP-8994. TestDFSShell creates file named "noFileHere", making further tests hard to understand (Andy Isaacson via daryn)

------------------------------------------
[...truncated 11290 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.359 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.415 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.861 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.035 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.899 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.804 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.357 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.226 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.104 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.75 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.423 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.952 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.093 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.594 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.742 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.037 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.411 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.445 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.07 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.216 sec

Results :

Tests run: 1610, Failures: 0, Errors: 0, Skipped: 4

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (native_tests) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
     [exec] 2012-10-31 12:51:45,621 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(319)) - starting cluster with 1 namenodes.
     [exec] Formatting using clusterid: testClusterID
     [exec] 2012-10-31 12:51:45,868 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-10-31 12:51:45,870 WARN  conf.Configuration (Configuration.java:warnOnceIfDeprecated(822)) - hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
     [exec] 2012-10-31 12:51:45,870 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-10-31 12:51:45,890 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-10-31 12:51:45,891 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication         = 1
     [exec] 2012-10-31 12:51:45,891 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication             = 512
     [exec] 2012-10-31 12:51:45,891 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication             = 1
     [exec] 2012-10-31 12:51:45,891 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams      = 2
     [exec] 2012-10-31 12:51:45,891 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks  = false
     [exec] 2012-10-31 12:51:45,891 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-10-31 12:51:45,891 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer        = false
     [exec] 2012-10-31 12:51:45,891 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-10-31 12:51:45,892 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup          = supergroup
     [exec] 2012-10-31 12:51:45,892 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-10-31 12:51:45,892 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-10-31 12:51:45,896 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-10-31 12:51:46,107 INFO  namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occurring more than 10 times
     [exec] 2012-10-31 12:51:46,108 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-10-31 12:51:46,109 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-10-31 12:51:46,109 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension     = 0
     [exec] 2012-10-31 12:51:47,214 INFO  common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1> has been successfully formatted.
     [exec] 2012-10-31 12:51:47,222 INFO  common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2> has been successfully formatted.
     [exec] 2012-10-31 12:51:47,233 INFO  namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-10-31 12:51:47,233 INFO  namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-10-31 12:51:47,243 INFO  namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-10-31 12:51:47,245 INFO  namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-10-31 12:51:47,268 INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(171)) - Going to retain 1 images with txid >= 0
     [exec] 2012-10-31 12:51:47,315 WARN  impl.MetricsConfig (MetricsConfig.java:loadFirst(123)) - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
     [exec] 2012-10-31 12:51:47,363 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(341)) - Scheduled snapshot period at 10 second(s).
     [exec] 2012-10-31 12:51:47,363 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - NameNode metrics system started
     [exec] 2012-10-31 12:51:47,376 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-10-31 12:51:47,376 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-10-31 12:51:47,390 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-10-31 12:51:47,391 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication         = 1
     [exec] 2012-10-31 12:51:47,391 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication             = 512
     [exec] 2012-10-31 12:51:47,391 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication             = 1
     [exec] 2012-10-31 12:51:47,391 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams      = 2
     [exec] 2012-10-31 12:51:47,391 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks  = false
     [exec] 2012-10-31 12:51:47,391 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-10-31 12:51:47,391 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer        = false
     [exec] 2012-10-31 12:51:47,391 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-10-31 12:51:47,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup          = supergroup
     [exec] 2012-10-31 12:51:47,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-10-31 12:51:47,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-10-31 12:51:47,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-10-31 12:51:47,393 INFO  namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occurring more than 10 times
     [exec] 2012-10-31 12:51:47,393 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-10-31 12:51:47,393 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-10-31 12:51:47,393 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension     = 0
     [exec] 2012-10-31 12:51:47,397 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/in_use.lock> acquired by nodename 26703@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:47,403 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/in_use.lock> acquired by nodename 26703@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:47,407 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current>
     [exec] 2012-10-31 12:51:47,407 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current>
     [exec] 2012-10-31 12:51:47,408 INFO  namenode.FSImage (FSImage.java:loadFSImage(611)) - No edit log streams selected.
     [exec] 2012-10-31 12:51:47,416 INFO  namenode.FSImage (FSImageFormat.java:load(167)) - Loading image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000> using no compression
     [exec] 2012-10-31 12:51:47,416 INFO  namenode.FSImage (FSImageFormat.java:load(170)) - Number of files = 1
     [exec] 2012-10-31 12:51:47,417 INFO  namenode.FSImage (FSImageFormat.java:loadFilesUnderConstruction(358)) - Number of files under construction = 0
     [exec] 2012-10-31 12:51:47,417 INFO  namenode.FSImage (FSImageFormat.java:load(192)) - Image file of size 122 loaded in 0 seconds.
     [exec] 2012-10-31 12:51:47,417 INFO  namenode.FSImage (FSImage.java:loadFSImage(754)) - Loaded image for txid 0 from <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000>
     [exec] 2012-10-31 12:51:47,422 INFO  namenode.FSEditLog (FSEditLog.java:startLogSegment(949)) - Starting log segment at 1
     [exec] 2012-10-31 12:51:47,602 INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
     [exec] 2012-10-31 12:51:47,602 INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(441)) - Finished loading FSImage in 209 msecs
     [exec] 2012-10-31 12:51:47,731 INFO  ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 52090
     [exec] 2012-10-31 12:51:47,752 INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4615)) - Registered FSNamesystemState MBean
     [exec] 2012-10-31 12:51:47,767 INFO  namenode.FSNamesystem (FSNamesystem.java:getCompleteBlocksTotal(4307)) - Number of blocks under construction: 0
     [exec] 2012-10-31 12:51:47,767 INFO  namenode.FSNamesystem (FSNamesystem.java:initializeReplQueues(3858)) - initializing replication queues
     [exec] 2012-10-31 12:51:47,780 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2205)) - Total number of blocks            = 0
     [exec] 2012-10-31 12:51:47,780 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2206)) - Number of invalid blocks          = 0
     [exec] 2012-10-31 12:51:47,780 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2207)) - Number of under-replicated blocks = 0
     [exec] 2012-10-31 12:51:47,780 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2208)) - Number of  over-replicated blocks = 0
     [exec] 2012-10-31 12:51:47,780 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2210)) - Number of blocks being written    = 0
     [exec] 2012-10-31 12:51:47,780 INFO  hdfs.StateChange (FSNamesystem.java:initializeReplQueues(3863)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 13 msec
     [exec] 2012-10-31 12:51:47,780 INFO  hdfs.StateChange (FSNamesystem.java:leave(3835)) - STATE* Leaving safe mode after 0 secs
     [exec] 2012-10-31 12:51:47,781 INFO  hdfs.StateChange (FSNamesystem.java:leave(3845)) - STATE* Network topology has 0 racks and 0 datanodes
     [exec] 2012-10-31 12:51:47,781 INFO  hdfs.StateChange (FSNamesystem.java:leave(3848)) - STATE* UnderReplicatedBlocks has 0 blocks
     [exec] 2012-10-31 12:51:47,833 INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
     [exec] 2012-10-31 12:51:47,882 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-10-31 12:51:47,883 INFO  http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
     [exec] 2012-10-31 12:51:47,884 INFO  http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-10-31 12:51:47,887 INFO  http.HttpServer (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-10-31 12:51:47,893 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 60078
     [exec] 2012-10-31 12:51:47,893 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-10-31 12:51:48,056 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:60078
     [exec] 2012-10-31 12:51:48,056 INFO  namenode.NameNode (NameNode.java:setHttpServerAddress(395)) - Web-server up at: localhost:60078
     [exec] 2012-10-31 12:51:48,057 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener on 52090: starting
     [exec] 2012-10-31 12:51:48,057 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-10-31 12:51:48,059 INFO  namenode.NameNode (NameNode.java:startCommonServices(492)) - NameNode RPC up at: localhost/127.0.0.1:52090
     [exec] 2012-10-31 12:51:48,059 INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(647)) - Starting services required for active state
     [exec] 2012-10-31 12:51:48,062 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1145)) - Starting DataNode 0 with dfs.datanode.data.dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1>,file:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2>
     [exec] 2012-10-31 12:51:48,078 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     [exec] 2012-10-31 12:51:48,089 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - DataNode metrics system started (again)
     [exec] 2012-10-31 12:51:48,089 INFO  datanode.DataNode (DataNode.java:<init>(313)) - Configured hostname is 127.0.0.1
     [exec] 2012-10-31 12:51:48,094 INFO  datanode.DataNode (DataNode.java:initDataXceiver(539)) - Opened streaming server at /127.0.0.1:34776
     [exec] 2012-10-31 12:51:48,096 INFO  datanode.DataNode (DataXceiverServer.java:<init>(77)) - Balancing bandwidth is 1048576 bytes/s
     [exec] 2012-10-31 12:51:48,097 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-10-31 12:51:48,098 INFO  http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2012-10-31 12:51:48,098 INFO  http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-10-31 12:51:48,099 INFO  datanode.DataNode (DataNode.java:startInfoServer(365)) - Opened info server at localhost:0
     [exec] 2012-10-31 12:51:48,101 INFO  datanode.DataNode (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-10-31 12:51:48,101 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 53237
     [exec] 2012-10-31 12:51:48,101 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-10-31 12:51:48,153 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:53237
     [exec] 2012-10-31 12:51:48,159 INFO  ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 47335
     [exec] 2012-10-31 12:51:48,164 INFO  datanode.DataNode (DataNode.java:initIpcServer(436)) - Opened IPC server at /127.0.0.1:47335
     [exec] 2012-10-31 12:51:48,171 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(148)) - Refresh request received for nameservices: null
     [exec] 2012-10-31 12:51:48,173 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(193)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2012-10-31 12:51:48,180 INFO  datanode.DataNode (BPServiceActor.java:run(658)) - Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:52090 starting to offer service
     [exec] 2012-10-31 12:51:48,184 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-10-31 12:51:48,184 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener on 47335: starting
     [exec] 2012-10-31 12:51:48,616 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 26703@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:48,617 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted
     [exec] 2012-10-31 12:51:48,617 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-10-31 12:51:48,620 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 26703@asf005.sp2.ygridcore.net
     [exec] 2012-10-31 12:51:48,620 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted
     [exec] 2012-10-31 12:51:48,621 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-10-31 12:51:48,656 INFO  common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-10-31 12:51:48,656 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-380395973-67.195.138.27-1351687906118> is not formatted.
     [exec] 2012-10-31 12:51:48,656 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-10-31 12:51:48,656 INFO  common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-380395973-67.195.138.27-1351687906118 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-380395973-67.195.138.27-1351687906118/current>
     [exec] 2012-10-31 12:51:48,658 INFO  common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-10-31 12:51:48,659 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-380395973-67.195.138.27-1351687906118> is not formatted.
     [exec] 2012-10-31 12:51:48,659 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-10-31 12:51:48,659 INFO  common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-380395973-67.195.138.27-1351687906118 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-380395973-67.195.138.27-1351687906118/current>
     [exec] 2012-10-31 12:51:48,662 INFO  datanode.DataNode (DataNode.java:initStorage(852)) - Setting up storage: nsid=273544147;bpid=BP-380395973-67.195.138.27-1351687906118;lv=-40;nsInfo=lv=-40;cid=testClusterID;nsid=273544147;c=0;bpid=BP-380395973-67.195.138.27-1351687906118
     [exec] 2012-10-31 12:51:48,672 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>
     [exec] 2012-10-31 12:51:48,672 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>
     [exec] 2012-10-31 12:51:48,678 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(1209)) - Registered FSDatasetState MBean
     [exec] 2012-10-31 12:51:48,682 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(243)) - Periodic Directory Tree Verification scan starting at 1351693685682 with interval 21600000
     [exec] 2012-10-31 12:51:48,683 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(1577)) - Adding block pool BP-380395973-67.195.138.27-1351687906118
     [exec] 2012-10-31 12:51:48,689 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1833)) - Waiting for cluster to become active
     [exec] 2012-10-31 12:51:48,691 INFO  datanode.DataNode (BPServiceActor.java:register(618)) - Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090 beginning handshake with NN
     [exec] 2012-10-31 12:51:48,693 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(661)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-1928486877-67.195.138.27-34776-1351687908623, infoPort=53237, ipcPort=47335, storageInfo=lv=-40;cid=testClusterID;nsid=273544147;c=0) storage DS-1928486877-67.195.138.27-34776-1351687908623
     [exec] 2012-10-31 12:51:48,695 INFO  net.NetworkTopology (NetworkTopology.java:add(388)) - Adding a new node: /default-rack/127.0.0.1:34776
     [exec] 2012-10-31 12:51:48,697 INFO  datanode.DataNode (BPServiceActor.java:register(631)) - Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090 successfully registered with NN
     [exec] 2012-10-31 12:51:48,697 INFO  datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:52090 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2012-10-31 12:51:48,700 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090 trying to claim ACTIVE state with txid=1
     [exec] 2012-10-31 12:51:48,701 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-380395973-67.195.138.27-1351687906118 (storage id DS-1928486877-67.195.138.27-34776-1351687908623) service to localhost/127.0.0.1:52090
     [exec] 2012-10-31 12:51:48,704 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:34776 after becoming active. Its block contents are no longer considered stale
     [exec] 2012-10-31 12:51:48,705 INFO  hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-1928486877-67.195.138.27-34776-1351687908623, infoPort=53237, ipcPort=47335, storageInfo=lv=-40;cid=testClusterID;nsid=273544147;c=0), blocks: 0, processing time: 2 msecs
     [exec] 2012-10-31 12:51:48,706 INFO  datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 0 msec to generate and 5 msecs for RPC and NN processing
     [exec] 2012-10-31 12:51:48,706 INFO  datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@189d7eb
     [exec] 2012-10-31 12:51:48,708 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-380395973-67.195.138.27-1351687906118
     [exec] 2012-10-31 12:51:48,712 INFO  datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-380395973-67.195.138.27-1351687906118 to blockPoolScannerMap, new size=1
     [exec] 2012-10-31 12:51:48,794 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0xf6ee5b68, pid=26703, tid=4137092816
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid26703.log>
     [exec] Aborted
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
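
The crash above happens in native code inside the test JVM started by the antrun "native_tests" step: HotSpot writes the hs_err_pid*.log named in the report and then aborts. That abort is what produces the "exec returned: 134" in the Maven error below, since POSIX shells report a signal-terminated child as 128 plus the signal number, and 134 - 128 = 6 is SIGABRT. A minimal sketch of that decoding (a hypothetical standalone helper, not part of the Hadoop build):

    // Hypothetical helper, not part of the Hadoop build: decodes the exit
    // status that Ant's <exec> reports for the crashed test JVM above.
    // POSIX shells encode "killed by signal N" as 128 + N, so 134 maps to
    // SIGABRT (6): the HotSpot crash handler writes hs_err_pid*.log and
    // then calls abort().
    public class ExecStatusDecoder {
        public static void main(String[] args) {
            int status = 134; // the "exec returned: 134" value reported by Maven
            if (status > 128) {
                System.out.println("child killed by signal " + (status - 128));
            } else {
                System.out.println("child exited with code " + status);
            }
        }
    }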
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:18:35.682s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:36.461s
[INFO] Finished at: Wed Oct 31 12:51:49 UTC 2012
[INFO] Final Memory: 23M/350M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4129
Updating HADOOP-8994
Updating HDFS-3573
Updating HDFS-3789
Updating HADOOP-8986
Updating HDFS-3916
Updating HDFS-3695
Updating HDFS-3809

Hadoop-Hdfs-trunk - Build # 1213 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1213/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11481 lines...]
     [exec] 2012-11-01 12:51:31,815 INFO  datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:40223 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735) service to localhost/127.0.0.1:40223 trying to claim ACTIVE state with txid=1
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735) service to localhost/127.0.0.1:40223
     [exec] 2012-11-01 12:51:31,823 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:45280 after becoming active. Its block contents are no longer considered stale
     [exec] 2012-11-01 12:51:31,824 INFO  hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-956326259-67.195.138.27-45280-1351774291735, infoPort=55286, ipcPort=42421, storageInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0), blocks: 0, processing time: 2 msecs
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@1277a30
     [exec] 2012-11-01 12:51:31,827 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,831 INFO  datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-1372242316-67.195.138.27-1351774289159 to blockPoolScannerMap, new size=1
     [exec] Aborted
     [exec] 2012-11-01 12:51:31,913 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0xf6ee9b68, pid=14319, tid=4137109200
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid14319.log
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:18:17.144s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:17.929s
[INFO] Finished at: Thu Nov 01 12:51:32 UTC 2012
[INFO] Final Memory: 26M/491M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4752
Updating MAPREDUCE-4724
Updating YARN-165
Updating YARN-166
Updating YARN-189
Updating YARN-159
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 1214 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1214/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11478 lines...]
     [exec] 2012-11-02 12:51:17,207 INFO  datanode.DataNode (BPServiceActor.java:register(618)) - Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 beginning handshake with NN
     [exec] 2012-11-02 12:51:17,209 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(661)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0) storage DS-743789385-67.195.138.27-45299-1351860677132
     [exec] 2012-11-02 12:51:17,212 INFO  net.NetworkTopology (NetworkTopology.java:add(388)) - Adding a new node: /default-rack/127.0.0.1:45299
     [exec] 2012-11-02 12:51:17,213 INFO  datanode.DataNode (BPServiceActor.java:register(631)) - Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 successfully registered with NN
     [exec] 2012-11-02 12:51:17,213 INFO  datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:37009 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2012-11-02 12:51:17,217 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 trying to claim ACTIVE state with txid=1
     [exec] 2012-11-02 12:51:17,217 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009
     [exec] 2012-11-02 12:51:17,221 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:45299 after becoming active. Its block contents are no longer considered stale
     [exec] 2012-11-02 12:51:17,222 INFO  hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0), blocks: 0, processing time: 1 msecs
     [exec] 2012-11-02 12:51:17,223 INFO  datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
     [exec] 2012-11-02 12:51:17,223 INFO  datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@19ccb73
     [exec] 2012-11-02 12:51:17,225 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-714465857-67.195.138.27-1351860674434
     [exec] 2012-11-02 12:51:17,229 INFO  datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-714465857-67.195.138.27-1351860674434 to blockPoolScannerMap, new size=1
     [exec] 2012-11-02 12:51:17,311 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0xf6e97b68, pid=19749, tid=4136773328
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid19749.log
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
     [exec] Aborted
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:17:35.336s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:17:36.108s
[INFO] Finished at: Fri Nov 02 12:51:17 UTC 2012
[INFO] Final Memory: 18M/478M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4729
Updating MAPREDUCE-4746
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 1215 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1215/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11353 lines...]
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.966 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.999 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.177 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.845 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.814 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.887 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.189 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.124 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.021 sec

Results :

Tests in error: 
  testBlockCorruptionRecoveryPolicy2(org.apache.hadoop.hdfs.TestDatanodeBlockScanner): Timed out waiting for /tmp/testBlockCorruptRecovery/file to reach 3 replicas

Tests run: 1610, Failures: 0, Errors: 1, Skipped: 4

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:17:42.372s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:17:43.148s
[INFO] Finished at: Sat Nov 03 12:51:23 UTC 2012
[INFO] Final Memory: 17M/447M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-3804
Updating MAPREDUCE-4763
Updating HDFS-4132
Updating HDFS-4143
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Jenkins build is back to normal : Hadoop-Hdfs-trunk #1216

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1216/>


Build failed in Jenkins: Hadoop-Hdfs-trunk #1215

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1215/changes>

Changes:

[szetszwo] HDFS-4143. Change blocks to private in INodeFile and rename isLink() to isSymlink() in INode.

[todd] HDFS-4132. When libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized memory. Contributed by Colin Patrick McCabe.

[bobby] MAPREDUCE-4763 repair test TestUmbilicalProtocolWithJobToken (Ivan A. Veselovsky via bobby)

[daryn] HDFS-3804.  TestHftpFileSystem fails intermittently with JDK7 (Trevor Robinson via daryn)

------------------------------------------
[...truncated 11160 lines...]
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.139 sec
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.315 sec
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.992 sec
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.353 sec
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.88 sec
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.958 sec
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.663 sec
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.066 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.784 sec
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.571 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.414 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.673 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.087 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.318 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.604 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 130.801 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.029 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 118.953 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.94 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.527 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.85 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.142 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.755 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.762 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.488 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.208 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.801 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.929 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.067 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.907 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.707 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.7 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.268 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.46 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.398 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.587 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.036 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.103 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.413 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.049 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.832 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 88.404 sec <<< FAILURE!
testBlockCorruptionRecoveryPolicy2(org.apache.hadoop.hdfs.TestDatanodeBlockScanner)  Time elapsed: 46.282 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Timed out waiting for /tmp/testBlockCorruptRecovery/file to reach 3 replicas
	at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:550)
	at org.apache.hadoop.hdfs.TestDatanodeBlockScanner.blockCorruptionRecoveryPolicy(TestDatanodeBlockScanner.java:323)
	at org.apache.hadoop.hdfs.TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy2(TestDatanodeBlockScanner.java:260)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
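
The TimeoutException above is raised by DFSTestUtil.waitReplication, which polls the NameNode's view of the file until every block reports the expected replica count, giving up after a fixed deadline. A minimal sketch of that polling idiom against the public FileSystem API (the helper name, poll interval, and timeout below are illustrative, not the exact Hadoop implementation):

    import java.io.IOException;
    import java.util.concurrent.TimeoutException;

    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WaitReplicationSketch {
      /** Polls until every block of 'file' reports 'expected' replicas. */
      static void waitForReplicas(FileSystem fs, Path file, int expected,
          long timeoutMs)
          throws IOException, InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
          FileStatus stat = fs.getFileStatus(file);
          BlockLocation[] blocks =
              fs.getFileBlockLocations(stat, 0, stat.getLen());
          boolean satisfied = true;
          for (BlockLocation b : blocks) {
            // getHosts() lists the DataNodes holding a copy of this block.
            if (b.getHosts().length != expected) {
              satisfied = false;
              break;
            }
          }
          if (satisfied) {
            return;
          }
          if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException("Timed out waiting for " + file
                + " to reach " + expected + " replicas");
          }
          Thread.sleep(500); // re-check after the DataNodes heartbeat again
        }
      }
    }

A loop of this shape produces exactly the failure recorded above when re-replication of a corrupted block does not finish before the deadline.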

Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.462 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.707 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.351 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.435 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.13 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.886 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.91 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.984 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.541 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.359 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.225 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.553 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.201 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.093 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.586 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.385 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.427 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.912 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.35 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.897 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.003 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.298 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.098 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.124 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.641 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.416 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.966 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.999 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.177 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.845 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.814 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.887 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.189 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.124 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.021 sec

Results :

Tests in error: 
  testBlockCorruptionRecoveryPolicy2(org.apache.hadoop.hdfs.TestDatanodeBlockScanner): Timed out waiting for /tmp/testBlockCorruptRecovery/file to reach 3 replicas

Tests run: 1610, Failures: 0, Errors: 1, Skipped: 4
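
The lone error is the waitReplication timeout from the stack trace above: the test writes a file at replication 3, deliberately corrupts replicas, and then waits for the block scanner and the NameNode to restore three healthy copies. A condensed sketch of that scenario using the test helpers named in the trace, DFSTestUtil.createFile and DFSTestUtil.waitReplication (the corruption step is elided since the real test drives it through MiniDFSCluster internals, and the file length and seed here are arbitrary):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DFSTestUtil;

    public class CorruptionRecoverySketch {
      /** 'fs' should point at a test cluster with at least 3 DataNodes. */
      static void exercise(FileSystem fs) throws Exception {
        Path file = new Path("/tmp/testBlockCorruptRecovery/file");
        // Write a small file at replication 3, wait for 3 live replicas.
        DFSTestUtil.createFile(fs, file, 1024L, (short) 3, 0L);
        DFSTestUtil.waitReplication(fs, file, (short) 3);
        // ... corrupt one or more replicas on disk here ...
        // This second wait is the one that timed out in this run:
        DFSTestUtil.waitReplication(fs, file, (short) 3);
      }
    }

Whether the timeout reflects a real re-replication bug or merely a slow build slave cannot be determined from this log alone.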

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:17:42.372s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:17:43.148s
[INFO] Finished at: Sat Nov 03 12:51:23 UTC 2012
[INFO] Final Memory: 17M/447M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-3804
Updating MAPREDUCE-4763
Updating HDFS-4132
Updating HDFS-4143

Build failed in Jenkins: Hadoop-Hdfs-trunk #1214

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1214/changes>

Changes:

[jlowe] MAPREDUCE-4729. job history UI not showing all job attempts. Contributed by Vinod Kumar Vavilapalli

[bobby] MAPREDUCE-4746. The MR Application Master does not have a config to set environment variables (Rob Parker via bobby)

------------------------------------------
[...truncated 11285 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.099 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.596 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.6 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.804 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.907 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.037 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.034 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.08 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.294 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.283 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.288 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.637 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.337 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.782 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.154 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.056 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.941 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.314 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.05 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.791 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.279 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.337 sec

Results :

Tests run: 1610, Failures: 0, Errors: 0, Skipped: 4

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (native_tests) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
     [exec] 2012-11-02 12:51:13,929 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(319)) - starting cluster with 1 namenodes.
     [exec] Formatting using clusterid: testClusterID
     [exec] 2012-11-02 12:51:14,188 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-11-02 12:51:14,189 WARN  conf.Configuration (Configuration.java:warnOnceIfDeprecated(823)) - hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
     [exec] 2012-11-02 12:51:14,190 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-11-02 12:51:14,210 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-11-02 12:51:14,210 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication         = 1
     [exec] 2012-11-02 12:51:14,210 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication             = 512
     [exec] 2012-11-02 12:51:14,210 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication             = 1
     [exec] 2012-11-02 12:51:14,210 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams      = 2
     [exec] 2012-11-02 12:51:14,210 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks  = false
     [exec] 2012-11-02 12:51:14,210 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-11-02 12:51:14,211 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer        = false
     [exec] 2012-11-02 12:51:14,211 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-11-02 12:51:14,211 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup          = supergroup
     [exec] 2012-11-02 12:51:14,211 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-11-02 12:51:14,211 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-11-02 12:51:14,216 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-11-02 12:51:14,423 INFO  namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occurring more than 10 times
     [exec] 2012-11-02 12:51:14,425 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-11-02 12:51:14,425 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-11-02 12:51:14,425 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension     = 0
     [exec] 2012-11-02 12:51:15,526 INFO  common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1> has been successfully formatted.
     [exec] 2012-11-02 12:51:15,532 INFO  common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2> has been successfully formatted.
     [exec] 2012-11-02 12:51:15,543 INFO  namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-11-02 12:51:15,543 INFO  namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-11-02 12:51:15,553 INFO  namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-11-02 12:51:15,556 INFO  namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-11-02 12:51:15,569 INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(171)) - Going to retain 1 images with txid >= 0
     [exec] 2012-11-02 12:51:15,617 WARN  impl.MetricsConfig (MetricsConfig.java:loadFirst(123)) - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
     [exec] 2012-11-02 12:51:15,672 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(341)) - Scheduled snapshot period at 10 second(s).
     [exec] 2012-11-02 12:51:15,672 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - NameNode metrics system started
     [exec] 2012-11-02 12:51:15,685 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-11-02 12:51:15,685 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-11-02 12:51:15,699 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-11-02 12:51:15,699 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication         = 1
     [exec] 2012-11-02 12:51:15,700 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication             = 512
     [exec] 2012-11-02 12:51:15,700 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication             = 1
     [exec] 2012-11-02 12:51:15,700 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams      = 2
     [exec] 2012-11-02 12:51:15,700 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks  = false
     [exec] 2012-11-02 12:51:15,700 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-11-02 12:51:15,700 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer        = false
     [exec] 2012-11-02 12:51:15,700 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-11-02 12:51:15,700 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup          = supergroup
     [exec] 2012-11-02 12:51:15,700 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-11-02 12:51:15,701 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-11-02 12:51:15,701 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-11-02 12:51:15,701 INFO  namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occurring more than 10 times
     [exec] 2012-11-02 12:51:15,702 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-11-02 12:51:15,702 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-11-02 12:51:15,702 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension     = 0
     [exec] 2012-11-02 12:51:15,707 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/in_use.lock> acquired by nodename 19749@asf005.sp2.ygridcore.net
     [exec] 2012-11-02 12:51:15,712 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/in_use.lock> acquired by nodename 19749@asf005.sp2.ygridcore.net
     [exec] 2012-11-02 12:51:15,715 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current>
     [exec] 2012-11-02 12:51:15,715 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current>
     [exec] 2012-11-02 12:51:15,716 INFO  namenode.FSImage (FSImage.java:loadFSImage(611)) - No edit log streams selected.
     [exec] 2012-11-02 12:51:15,718 INFO  namenode.FSImage (FSImageFormat.java:load(167)) - Loading image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000> using no compression
     [exec] 2012-11-02 12:51:15,718 INFO  namenode.FSImage (FSImageFormat.java:load(170)) - Number of files = 1
     [exec] 2012-11-02 12:51:15,719 INFO  namenode.FSImage (FSImageFormat.java:loadFilesUnderConstruction(358)) - Number of files under construction = 0
     [exec] 2012-11-02 12:51:15,719 INFO  namenode.FSImage (FSImageFormat.java:load(192)) - Image file of size 122 loaded in 0 seconds.
     [exec] 2012-11-02 12:51:15,719 INFO  namenode.FSImage (FSImage.java:loadFSImage(754)) - Loaded image for txid 0 from <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000>
     [exec] 2012-11-02 12:51:15,723 INFO  namenode.FSEditLog (FSEditLog.java:startLogSegment(949)) - Starting log segment at 1
     [exec] 2012-11-02 12:51:16,034 INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
     [exec] 2012-11-02 12:51:16,035 INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(441)) - Finished loading FSImage in 333 msecs
     [exec] 2012-11-02 12:51:16,165 INFO  ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 37009
     [exec] 2012-11-02 12:51:16,186 INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4615)) - Registered FSNamesystemState MBean
     [exec] 2012-11-02 12:51:16,200 INFO  namenode.FSNamesystem (FSNamesystem.java:getCompleteBlocksTotal(4307)) - Number of blocks under construction: 0
     [exec] 2012-11-02 12:51:16,201 INFO  namenode.FSNamesystem (FSNamesystem.java:initializeReplQueues(3858)) - initializing replication queues
     [exec] 2012-11-02 12:51:16,212 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2205)) - Total number of blocks            = 0
     [exec] 2012-11-02 12:51:16,212 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2206)) - Number of invalid blocks          = 0
     [exec] 2012-11-02 12:51:16,213 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2207)) - Number of under-replicated blocks = 0
     [exec] 2012-11-02 12:51:16,213 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2208)) - Number of  over-replicated blocks = 0
     [exec] 2012-11-02 12:51:16,213 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2210)) - Number of blocks being written    = 0
     [exec] 2012-11-02 12:51:16,213 INFO  hdfs.StateChange (FSNamesystem.java:initializeReplQueues(3863)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 12 msec
     [exec] 2012-11-02 12:51:16,213 INFO  hdfs.StateChange (FSNamesystem.java:leave(3835)) - STATE* Leaving safe mode after 0 secs
     [exec] 2012-11-02 12:51:16,213 INFO  hdfs.StateChange (FSNamesystem.java:leave(3845)) - STATE* Network topology has 0 racks and 0 datanodes
     [exec] 2012-11-02 12:51:16,213 INFO  hdfs.StateChange (FSNamesystem.java:leave(3848)) - STATE* UnderReplicatedBlocks has 0 blocks
     [exec] 2012-11-02 12:51:16,266 INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
     [exec] 2012-11-02 12:51:16,321 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-11-02 12:51:16,323 INFO  http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
     [exec] 2012-11-02 12:51:16,323 INFO  http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-11-02 12:51:16,326 INFO  http.HttpServer (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-11-02 12:51:16,332 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 45522
     [exec] 2012-11-02 12:51:16,332 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-11-02 12:51:16,490 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:45522
     [exec] 2012-11-02 12:51:16,491 INFO  namenode.NameNode (NameNode.java:setHttpServerAddress(395)) - Web-server up at: localhost:45522
     [exec] 2012-11-02 12:51:16,491 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-11-02 12:51:16,491 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener on 37009: starting
     [exec] 2012-11-02 12:51:16,494 INFO  namenode.NameNode (NameNode.java:startCommonServices(492)) - NameNode RPC up at: localhost/127.0.0.1:37009
     [exec] 2012-11-02 12:51:16,494 INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(647)) - Starting services required for active state
     [exec] 2012-11-02 12:51:16,496 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1145)) - Starting DataNode 0 with dfs.datanode.data.dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1,file>:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2>
     [exec] 2012-11-02 12:51:16,513 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     [exec] 2012-11-02 12:51:16,523 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - DataNode metrics system started (again)
     [exec] 2012-11-02 12:51:16,524 INFO  datanode.DataNode (DataNode.java:<init>(313)) - Configured hostname is 127.0.0.1
     [exec] 2012-11-02 12:51:16,529 INFO  datanode.DataNode (DataNode.java:initDataXceiver(539)) - Opened streaming server at /127.0.0.1:45299
     [exec] 2012-11-02 12:51:16,531 INFO  datanode.DataNode (DataXceiverServer.java:<init>(77)) - Balancing bandwidth is 1048576 bytes/s
     [exec] 2012-11-02 12:51:16,532 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-11-02 12:51:16,532 INFO  http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2012-11-02 12:51:16,533 INFO  http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-11-02 12:51:16,534 INFO  datanode.DataNode (DataNode.java:startInfoServer(365)) - Opened info server at localhost:0
     [exec] 2012-11-02 12:51:16,536 INFO  datanode.DataNode (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-11-02 12:51:16,536 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 44178
     [exec] 2012-11-02 12:51:16,536 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-11-02 12:51:16,676 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:44178
     [exec] 2012-11-02 12:51:16,683 INFO  ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 53374
     [exec] 2012-11-02 12:51:16,688 INFO  datanode.DataNode (DataNode.java:initIpcServer(436)) - Opened IPC server at /127.0.0.1:53374
     [exec] 2012-11-02 12:51:16,695 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(148)) - Refresh request received for nameservices: null
     [exec] 2012-11-02 12:51:16,698 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(193)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2012-11-02 12:51:16,705 INFO  datanode.DataNode (BPServiceActor.java:run(658)) - Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:37009 starting to offer service
     [exec] 2012-11-02 12:51:16,709 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-11-02 12:51:16,709 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener on 53374: starting
     [exec] 2012-11-02 12:51:17,124 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 19749@asf005.sp2.ygridcore.net
     [exec] 2012-11-02 12:51:17,125 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted
     [exec] 2012-11-02 12:51:17,125 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-11-02 12:51:17,130 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 19749@asf005.sp2.ygridcore.net
     [exec] 2012-11-02 12:51:17,130 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted
     [exec] 2012-11-02 12:51:17,130 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-11-02 12:51:17,167 INFO  common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-11-02 12:51:17,167 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-714465857-67.195.138.27-1351860674434> is not formatted.
     [exec] 2012-11-02 12:51:17,167 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-11-02 12:51:17,167 INFO  common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-714465857-67.195.138.27-1351860674434 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-714465857-67.195.138.27-1351860674434/current>
     [exec] 2012-11-02 12:51:17,169 INFO  common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-11-02 12:51:17,170 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-714465857-67.195.138.27-1351860674434> is not formatted.
     [exec] 2012-11-02 12:51:17,170 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-11-02 12:51:17,170 INFO  common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-714465857-67.195.138.27-1351860674434 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-714465857-67.195.138.27-1351860674434/current>
     [exec] 2012-11-02 12:51:17,173 INFO  datanode.DataNode (DataNode.java:initStorage(852)) - Setting up storage: nsid=71175640;bpid=BP-714465857-67.195.138.27-1351860674434;lv=-40;nsInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0;bpid=BP-714465857-67.195.138.27-1351860674434
     [exec] 2012-11-02 12:51:17,183 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>
     [exec] 2012-11-02 12:51:17,183 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>
     [exec] 2012-11-02 12:51:17,194 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(1209)) - Registered FSDatasetState MBean
     [exec] 2012-11-02 12:51:17,198 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(243)) - Periodic Directory Tree Verification scan starting at 1351879520198 with interval 21600000
     [exec] 2012-11-02 12:51:17,199 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(1577)) - Adding block pool BP-714465857-67.195.138.27-1351860674434
     [exec] 2012-11-02 12:51:17,206 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1833)) - Waiting for cluster to become active
     [exec] 2012-11-02 12:51:17,207 INFO  datanode.DataNode (BPServiceActor.java:register(618)) - Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 beginning handshake with NN
     [exec] 2012-11-02 12:51:17,209 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(661)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0) storage DS-743789385-67.195.138.27-45299-1351860677132
     [exec] 2012-11-02 12:51:17,212 INFO  net.NetworkTopology (NetworkTopology.java:add(388)) - Adding a new node: /default-rack/127.0.0.1:45299
     [exec] 2012-11-02 12:51:17,213 INFO  datanode.DataNode (BPServiceActor.java:register(631)) - Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 successfully registered with NN
     [exec] 2012-11-02 12:51:17,213 INFO  datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:37009 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2012-11-02 12:51:17,217 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 trying to claim ACTIVE state with txid=1
     [exec] 2012-11-02 12:51:17,217 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009
     [exec] 2012-11-02 12:51:17,221 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:45299 after becoming active. Its block contents are no longer considered stale
     [exec] 2012-11-02 12:51:17,222 INFO  hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0), blocks: 0, processing time: 1 msecs
     [exec] 2012-11-02 12:51:17,223 INFO  datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
     [exec] 2012-11-02 12:51:17,223 INFO  datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@19ccb73
     [exec] 2012-11-02 12:51:17,225 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-714465857-67.195.138.27-1351860674434
     [exec] 2012-11-02 12:51:17,229 INFO  datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-714465857-67.195.138.27-1351860674434 to blockPoolScannerMap, new size=1
     [exec] 2012-11-02 12:51:17,311 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0xf6e97b68, pid=19749, tid=4136773328
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid19749.log>
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
     [exec] Aborted
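
Unlike the surefire timeout earlier in this thread, this run passed all 1610 unit tests and then died in the maven-antrun native_tests step: the harness starts a MiniDFSCluster inside the test JVM (the startup log above), the native libhdfs tests run against it, and the JVM segfaults mid-run, so the step aborts (the "exec returned: 134" below is 128 plus SIGABRT). A minimal sketch of the cluster bring-up the log shows, assuming the standard MiniDFSCluster.Builder test API; the wiring to the native tests themselves is not visible in this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class NativeTestClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // One NameNode, one DataNode: the shape the log above reports
        // ("starting cluster with 1 namenodes", "Starting DataNode 0").
        MiniDFSCluster cluster =
            new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        try {
          cluster.waitActive(); // "Waiting for cluster to become active"
          // ... the libhdfs (JNI) tests would run against the cluster
          //     here; the SIGSEGV above fired at this point ...
        } finally {
          cluster.shutdown();
        }
      }
    }

The hs_err_pid19749.log file named above holds the JVM's full crash report (problematic frame, registers, loaded libraries) on the build machine.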
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:17:35.336s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:17:36.108s
[INFO] Finished at: Fri Nov 02 12:51:17 UTC 2012
[INFO] Final Memory: 18M/478M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4729
Updating MAPREDUCE-4746

Build failed in Jenkins: Hadoop-Hdfs-trunk #1213

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1213/changes>

Changes:

[vinodkv] YARN-189. Fixed a deadlock between RM's ApplicationMasterService and the dispatcher. Contributed by Thomas Graves.

[bobby] MAPREDUCE-4724. job history web ui applications page should be sorted to display last app first (tgraves via bobby)

[bobby] YARN-166. capacity scheduler doesn't allow capacity < 1.0 (tgraves via bobby)

[bobby] YARN-159. RM web ui applications page should be sorted to display last app first (tgraves via bobby)

[bobby] YARN-165. RM should point tracking URL to RM web page for app when AM fails (jlowe via bobby)

[tgraves] MAPREDUCE-4752. Reduce MR AM memory usage through String Interning (Robert Evans via tgraves)

------------------------------------------
[...truncated 11288 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.58 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.32 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.453 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.911 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.27 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.169 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.068 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.161 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.088 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.066 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.692 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.392 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.865 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.273 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.696 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.838 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.003 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.205 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.14 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.046 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.368 sec

Results :

Tests run: 1610, Failures: 0, Errors: 0, Skipped: 4

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (native_tests) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
     [exec] 2012-11-01 12:51:28,657 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(319)) - starting cluster with 1 namenodes.
     [exec] Formatting using clusterid: testClusterID
     [exec] 2012-11-01 12:51:28,913 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-11-01 12:51:28,914 WARN  conf.Configuration (Configuration.java:warnOnceIfDeprecated(823)) - hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
     [exec] 2012-11-01 12:51:28,914 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication         = 1
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication             = 512
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication             = 1
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams      = 2
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks  = false
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-11-01 12:51:28,936 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer        = false
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup          = supergroup
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-11-01 12:51:28,941 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-11-01 12:51:29,149 INFO  namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occurring more than 10 times
     [exec] 2012-11-01 12:51:29,150 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-11-01 12:51:29,150 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-11-01 12:51:29,151 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension     = 0
     [exec] 2012-11-01 12:51:30,210 INFO  common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1> has been successfully formatted.
     [exec] 2012-11-01 12:51:30,218 INFO  common.Storage (NNStorage.java:format(525)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2> has been successfully formatted.
     [exec] 2012-11-01 12:51:30,229 INFO  namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-11-01 12:51:30,229 INFO  namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000> using no compression
     [exec] 2012-11-01 12:51:30,239 INFO  namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-11-01 12:51:30,243 INFO  namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
     [exec] 2012-11-01 12:51:30,259 INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(171)) - Going to retain 1 images with txid >= 0
     [exec] 2012-11-01 12:51:30,307 WARN  impl.MetricsConfig (MetricsConfig.java:loadFirst(123)) - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
     [exec] 2012-11-01 12:51:30,362 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(341)) - Scheduled snapshot period at 10 second(s).
     [exec] 2012-11-01 12:51:30,362 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - NameNode metrics system started
     [exec] 2012-11-01 12:51:30,375 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
     [exec] 2012-11-01 12:51:30,375 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188)) - dfs.block.invalidate.limit=1000
     [exec] 2012-11-01 12:51:30,389 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
     [exec] 2012-11-01 12:51:30,389 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280)) - defaultReplication         = 1
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281)) - maxReplication             = 512
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282)) - minReplication             = 1
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283)) - maxReplicationStreams      = 2
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284)) - shouldCheckForEnoughRacks  = false
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285)) - replicationRecheckInterval = 3000
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286)) - encryptDataTransfer        = false
     [exec] 2012-11-01 12:51:30,390 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473)) - fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-11-01 12:51:30,390 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474)) - supergroup          = supergroup
     [exec] 2012-11-01 12:51:30,390 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475)) - isPermissionEnabled = true
     [exec] 2012-11-01 12:51:30,391 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489)) - HA Enabled: false
     [exec] 2012-11-01 12:51:30,391 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521)) - Append Enabled: true
     [exec] 2012-11-01 12:51:30,391 INFO  namenode.NameNode (FSDirectory.java:<init>(143)) - Caching file names occurring more than 10 times
     [exec] 2012-11-01 12:51:30,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-11-01 12:51:30,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754)) - dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-11-01 12:51:30,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755)) - dfs.namenode.safemode.extension     = 0
     [exec] 2012-11-01 12:51:30,397 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/in_use.lock> acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:30,400 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/in_use.lock> acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:30,404 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current>
     [exec] 2012-11-01 12:51:30,404 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current>
     [exec] 2012-11-01 12:51:30,405 INFO  namenode.FSImage (FSImage.java:loadFSImage(611)) - No edit log streams selected.
     [exec] 2012-11-01 12:51:30,407 INFO  namenode.FSImage (FSImageFormat.java:load(167)) - Loading image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000> using no compression
     [exec] 2012-11-01 12:51:30,407 INFO  namenode.FSImage (FSImageFormat.java:load(170)) - Number of files = 1
     [exec] 2012-11-01 12:51:30,407 INFO  namenode.FSImage (FSImageFormat.java:loadFilesUnderConstruction(358)) - Number of files under construction = 0
     [exec] 2012-11-01 12:51:30,408 INFO  namenode.FSImage (FSImageFormat.java:load(192)) - Image file of size 122 loaded in 0 seconds.
     [exec] 2012-11-01 12:51:30,408 INFO  namenode.FSImage (FSImage.java:loadFSImage(754)) - Loaded image for txid 0 from <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000>
     [exec] 2012-11-01 12:51:30,412 INFO  namenode.FSEditLog (FSEditLog.java:startLogSegment(949)) - Starting log segment at 1
     [exec] 2012-11-01 12:51:30,632 INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
     [exec] 2012-11-01 12:51:30,632 INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(441)) - Finished loading FSImage in 240 msecs
     [exec] 2012-11-01 12:51:30,761 INFO  ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 40223
     [exec] 2012-11-01 12:51:30,781 INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4615)) - Registered FSNamesystemState MBean
     [exec] 2012-11-01 12:51:30,796 INFO  namenode.FSNamesystem (FSNamesystem.java:getCompleteBlocksTotal(4307)) - Number of blocks under construction: 0
     [exec] 2012-11-01 12:51:30,796 INFO  namenode.FSNamesystem (FSNamesystem.java:initializeReplQueues(3858)) - initializing replication queues
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2205)) - Total number of blocks            = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2206)) - Number of invalid blocks          = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2207)) - Number of under-replicated blocks = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2208)) - Number of  over-replicated blocks = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2210)) - Number of blocks being written    = 0
     [exec] 2012-11-01 12:51:30,808 INFO  hdfs.StateChange (FSNamesystem.java:initializeReplQueues(3863)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 12 msec
     [exec] 2012-11-01 12:51:30,808 INFO  hdfs.StateChange (FSNamesystem.java:leave(3835)) - STATE* Leaving safe mode after 0 secs
     [exec] 2012-11-01 12:51:30,809 INFO  hdfs.StateChange (FSNamesystem.java:leave(3845)) - STATE* Network topology has 0 racks and 0 datanodes
     [exec] 2012-11-01 12:51:30,809 INFO  hdfs.StateChange (FSNamesystem.java:leave(3848)) - STATE* UnderReplicatedBlocks has 0 blocks
     [exec] 2012-11-01 12:51:30,861 INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
     [exec] 2012-11-01 12:51:30,916 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-11-01 12:51:30,918 INFO  http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
     [exec] 2012-11-01 12:51:30,918 INFO  http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-11-01 12:51:30,921 INFO  http.HttpServer (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-11-01 12:51:30,928 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 59969
     [exec] 2012-11-01 12:51:30,928 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-11-01 12:51:31,086 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:59969
     [exec] 2012-11-01 12:51:31,086 INFO  namenode.NameNode (NameNode.java:setHttpServerAddress(395)) - Web-server up at: localhost:59969
     [exec] 2012-11-01 12:51:31,086 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener on 40223: starting
     [exec] 2012-11-01 12:51:31,086 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-11-01 12:51:31,089 INFO  namenode.NameNode (NameNode.java:startCommonServices(492)) - NameNode RPC up at: localhost/127.0.0.1:40223
     [exec] 2012-11-01 12:51:31,089 INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(647)) - Starting services required for active state
     [exec] 2012-11-01 12:51:31,091 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1145)) - Starting DataNode 0 with dfs.datanode.data.dir: file:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1>,file:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2>
     [exec] 2012-11-01 12:51:31,108 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
     [exec] 2012-11-01 12:51:31,119 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - DataNode metrics system started (again)
     [exec] 2012-11-01 12:51:31,119 INFO  datanode.DataNode (DataNode.java:<init>(313)) - Configured hostname is 127.0.0.1
     [exec] 2012-11-01 12:51:31,124 INFO  datanode.DataNode (DataNode.java:initDataXceiver(539)) - Opened streaming server at /127.0.0.1:45280
     [exec] 2012-11-01 12:51:31,126 INFO  datanode.DataNode (DataXceiverServer.java:<init>(77)) - Balancing bandwidth is 1048576 bytes/s
     [exec] 2012-11-01 12:51:31,127 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-11-01 12:51:31,128 INFO  http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2012-11-01 12:51:31,128 INFO  http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2012-11-01 12:51:31,129 INFO  datanode.DataNode (DataNode.java:startInfoServer(365)) - Opened info server at localhost:0
     [exec] 2012-11-01 12:51:31,131 INFO  datanode.DataNode (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
     [exec] 2012-11-01 12:51:31,131 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 55286
     [exec] 2012-11-01 12:51:31,131 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-11-01 12:51:31,269 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:55286
     [exec] 2012-11-01 12:51:31,276 INFO  ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 42421
     [exec] 2012-11-01 12:51:31,280 INFO  datanode.DataNode (DataNode.java:initIpcServer(436)) - Opened IPC server at /127.0.0.1:42421
     [exec] 2012-11-01 12:51:31,287 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(148)) - Refresh request received for nameservices: null
     [exec] 2012-11-01 12:51:31,289 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(193)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2012-11-01 12:51:31,296 INFO  datanode.DataNode (BPServiceActor.java:run(658)) - Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:40223 starting to offer service
     [exec] 2012-11-01 12:51:31,300 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
     [exec] 2012-11-01 12:51:31,300 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener on 42421: starting
     [exec] 2012-11-01 12:51:31,726 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:31,727 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted
     [exec] 2012-11-01 12:51:31,727 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-11-01 12:51:31,732 INFO  common.Storage (Storage.java:tryLock(662)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:31,732 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted
     [exec] 2012-11-01 12:51:31,733 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
     [exec] 2012-11-01 12:51:31,770 INFO  common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-11-01 12:51:31,771 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1372242316-67.195.138.27-1351774289159> is not formatted.
     [exec] 2012-11-01 12:51:31,771 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-11-01 12:51:31,771 INFO  common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-1372242316-67.195.138.27-1351774289159 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1372242316-67.195.138.27-1351774289159/current>
     [exec] 2012-11-01 12:51:31,773 INFO  common.Storage (Storage.java:lock(626)) - Locking is disabled
     [exec] 2012-11-01 12:51:31,773 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1372242316-67.195.138.27-1351774289159> is not formatted.
     [exec] 2012-11-01 12:51:31,773 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
     [exec] 2012-11-01 12:51:31,774 INFO  common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-1372242316-67.195.138.27-1351774289159 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1372242316-67.195.138.27-1351774289159/current>
     [exec] 2012-11-01 12:51:31,777 INFO  datanode.DataNode (DataNode.java:initStorage(852)) - Setting up storage: nsid=1188264114;bpid=BP-1372242316-67.195.138.27-1351774289159;lv=-40;nsInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0;bpid=BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,791 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>
     [exec] 2012-11-01 12:51:31,791 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>
     [exec] 2012-11-01 12:51:31,796 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(1209)) - Registered FSDatasetState MBean
     [exec] 2012-11-01 12:51:31,800 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(243)) - Periodic Directory Tree Verification scan starting at 1351783360800 with interval 21600000
     [exec] 2012-11-01 12:51:31,801 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(1577)) - Adding block pool BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,808 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1833)) - Waiting for cluster to become active
     [exec] 2012-11-01 12:51:31,809 INFO  datanode.DataNode (BPServiceActor.java:register(618)) - Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735) service to localhost/127.0.0.1:40223 beginning handshake with NN
     [exec] 2012-11-01 12:51:31,811 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(661)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-956326259-67.195.138.27-45280-1351774291735, infoPort=55286, ipcPort=42421, storageInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0) storage DS-956326259-67.195.138.27-45280-1351774291735
     [exec] 2012-11-01 12:51:31,814 INFO  net.NetworkTopology (NetworkTopology.java:add(388)) - Adding a new node: /default-rack/127.0.0.1:45280
     [exec] 2012-11-01 12:51:31,815 INFO  datanode.DataNode (BPServiceActor.java:register(631)) - Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735) service to localhost/127.0.0.1:40223 successfully registered with NN
     [exec] 2012-11-01 12:51:31,815 INFO  datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:40223 using DELETEREPORT_INTERVAL of 300000 msec, BLOCKREPORT_INTERVAL of 21600000 msec, Initial delay: 0 msec; heartBeatInterval=3000
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735) service to localhost/127.0.0.1:40223 trying to claim ACTIVE state with txid=1
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735) service to localhost/127.0.0.1:40223
     [exec] 2012-11-01 12:51:31,823 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:45280 after becoming active. Its block contents are no longer considered stale
     [exec] 2012-11-01 12:51:31,824 INFO  hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-956326259-67.195.138.27-45280-1351774291735, infoPort=55286, ipcPort=42421, storageInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0), blocks: 0, processing time: 2 msecs
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command: org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@1277a30
     [exec] 2012-11-01 12:51:31,827 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,831 INFO  datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-1372242316-67.195.138.27-1351774289159 to blockPoolScannerMap, new size=1
     [exec] Aborted
     [exec] 2012-11-01 12:51:31,913 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
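
Everything from the image load down to "Cluster is active" is MiniDFSCluster bringing up a one-NameNode, one-DataNode cluster for the native tests: the DataNode formats its storage directories, registers with the NameNode, sends its first (empty) block report, and receives a FinalizeCommand back. A minimal sketch of that usage, assuming the stock hadoop-hdfs test builder API rather than the exact harness these native tests use:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterSketch {
      public static void main(String[] args) throws IOException {
        // Bring up an in-process HDFS cluster the way the log above shows:
        // format the name/data dirs, start the NameNode, then DataNode 0.
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)      // "Starting DataNode 0 ..."
            .build();             // NN comes up, DN registers and block-reports
        try {
          cluster.waitActive();   // returns once the DN is live: "Cluster is active"
          // ... exercise cluster.getFileSystem() here ...
        } finally {
          cluster.shutdown();
        }
      }
    }
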
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0xf6ee9b68, pid=14319, tid=4137109200
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid14319.log>
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
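
In the crash banner, the "V" on the problematic frame marks a frame inside the JVM itself (libjvm.so) rather than application or JNI code, so the SIGSEGV hit HotSpot internals on this 32-bit Server VM (JDK 6u26); the stray "Aborted" interleaved above is the same process dying. The hs_err_pid14319.log referenced here carries the full register and stack dump for the faulting thread.
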
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:18:17.144s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:17.929s
[INFO] Finished at: Thu Nov 01 12:51:32 UTC 2012
[INFO] Final Memory: 26M/491M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
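
The antrun failure is consistent with the crash report: exec returned 134 = 128 + 6, i.e. the forked test binary died on signal 6 (SIGABRT), which the HotSpot fatal error handler raises after writing the hs_err file for the SIGSEGV. Re-running with -e or -X would only add Maven-side stack traces; the useful diagnostics are the hs_err log and the [exec] output above.
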
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4752
Updating MAPREDUCE-4724
Updating YARN-165
Updating YARN-166
Updating YARN-189
Updating YARN-159