Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2015/05/19 16:29:34 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #190

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/190/changes>

Changes:

[umamahesh] HDFS-8412. Fix the test failures in HTTPFS: In some tests setReplication called after fs close. Contributed by Uma Maheswara Rao G.

[aw] HADOOP-11884. test-patch.sh should pull the real findbugs version  (Kengo Seki via aw)

[aw] HADOOP-11944. add option to test-patch to avoid relocating patch process directory (Sean Busbey via aw)

[aw] HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)

[arp] HDFS-8345. Storage policy APIs must be exposed via the FileSystem interface. (Arpit Agarwal)

[szetszwo] HDFS-8405. Fix a typo in NamenodeFsck.  Contributed by Takanobu Asanuma

[raviprak] HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)

[xgong] YARN-3541. Add version info on timeline service / generic history web UI and REST API. Contributed by Zhijie Shen

[jing9] HADOOP-1540. Support file exclusion list in distcp. Contributed by Rich Haase.

[vinayakumarb] HDFS-6348. SecondaryNameNode not terminating properly on runtime exceptions (Contributed by Rakesh R)

[aajisaka] HADOOP-10971. Add -C flag to make `hadoop fs -ls` print filenames only. Contributed by Kengo Seki.

[aajisaka] Move HADOOP-8934 in CHANGES.txt from 3.0.0 to 2.8.0.

[vinayakumarb] HADOOP-11103. Clean up RemoteException (Contributed by Sean Busbey)

[aajisaka] Move HADOOP-11581 in CHANGES.txt from 3.0.0 to 2.8.0.

------------------------------------------
[...truncated 8399 lines...]
     [exec] 2015-05-19 14:26:22,735 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-19 14:26:22,746 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:35907 starting to offer service
     [exec] 2015-05-19 14:26:22,752 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 46119: starting
     [exec] 2015-05-19 14:26:22,753 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-19 14:26:23,004 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 5676@asf906.gq1.ygridcore.net
     [exec] 2015-05-19 14:26:23,005 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,005 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-19 14:26:23,024 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,024 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756>
     [exec] 2015-05-19 14:26:23,025 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756> is not formatted for BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,025 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-19 14:26:23,025 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-7584180-67.195.81.150-1432045580756 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756/current>
     [exec] 2015-05-19 14:26:23,027 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 5676@asf906.gq1.ygridcore.net
     [exec] 2015-05-19 14:26:23,027 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,027 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-19 14:26:23,042 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,042 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756>
     [exec] 2015-05-19 14:26:23,042 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756> is not formatted for BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,043 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-19 14:26:23,043 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-7584180-67.195.81.150-1432045580756 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756/current>
     [exec] 2015-05-19 14:26:23,044 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=549639075;bpid=BP-7584180-67.195.81.150-1432045580756;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=549639075;c=0;bpid=BP-7584180-67.195.81.150-1432045580756;dnuuid=null
     [exec] 2015-05-19 14:26:23,046 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
     [exec] 2015-05-19 14:26:23,067 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-2108349f-36fa-4a07-9a8a-3dfa1545438e
     [exec] 2015-05-19 14:26:23,068 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-19 14:26:23,068 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a
     [exec] 2015-05-19 14:26:23,068 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-19 14:26:23,071 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-19 14:26:23,078 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(332)) - Periodic Directory Tree Verification scan starting at 1432066256078 with interval 21600000
     [exec] 2015-05-19 14:26:23,078 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,079 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-7584180-67.195.81.150-1432045580756 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-19 14:26:23,079 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-7584180-67.195.81.150-1432045580756 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-19 14:26:23,094 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-7584180-67.195.81.150-1432045580756 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 15ms
     [exec] 2015-05-19 14:26:23,095 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-19 14:26:23,095 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-19 14:26:23,095 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-7584180-67.195.81.150-1432045580756 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 15ms
     [exec] 2015-05-19 14:26:23,095 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-7584180-67.195.81.150-1432045580756: 16ms
     [exec] 2015-05-19 14:26:23,096 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-7584180-67.195.81.150-1432045580756 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-19 14:26:23,096 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756/current/replicas> doesn't exist 
     [exec] 2015-05-19 14:26:23,097 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-7584180-67.195.81.150-1432045580756 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-19 14:26:23,097 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-7584180-67.195.81.150-1432045580756 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 1ms
     [exec] 2015-05-19 14:26:23,097 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756/current/replicas> doesn't exist 
     [exec] 2015-05-19 14:26:23,097 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-7584180-67.195.81.150-1432045580756 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-19 14:26:23,097 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-19 14:26:23,099 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907 beginning handshake with NN
     [exec] 2015-05-19 14:26:23,110 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(883)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:56401, datanodeUuid=a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb, infoPort=54329, infoSecurePort=0, ipcPort=46119, storageInfo=lv=-56;cid=testClusterID;nsid=549639075;c=0) storage a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
     [exec] 2015-05-19 14:26:23,110 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-19 14:26:23,111 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:56401
     [exec] 2015-05-19 14:26:23,115 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907 successfully registered with NN
     [exec] 2015-05-19 14:26:23,115 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:35907 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-19 14:26:23,125 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-19 14:26:23,125 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-2108349f-36fa-4a07-9a8a-3dfa1545438e for DN 127.0.0.1:56401
     [exec] 2015-05-19 14:26:23,126 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a for DN 127.0.0.1:56401
     [exec] 2015-05-19 14:26:23,134 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-19 14:26:23,135 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
     [exec] 2015-05-19 14:26:23,147 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a from datanode a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
     [exec] 2015-05-19 14:26:23,147 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a node DatanodeRegistration(127.0.0.1:56401, datanodeUuid=a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb, infoPort=54329, infoSecurePort=0, ipcPort=46119, storageInfo=lv=-56;cid=testClusterID;nsid=549639075;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-19 14:26:23,148 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-2108349f-36fa-4a07-9a8a-3dfa1545438e from datanode a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
     [exec] 2015-05-19 14:26:23,148 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-2108349f-36fa-4a07-9a8a-3dfa1545438e node DatanodeRegistration(127.0.0.1:56401, datanodeUuid=a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb, infoPort=54329, infoSecurePort=0, ipcPort=46119, storageInfo=lv=-56;cid=testClusterID;nsid=549639075;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-19 14:26:23,163 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x9ad282ae030b6f0e,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 25 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-19 14:26:23,163 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,204 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-19 14:26:23,210 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-19 14:26:23,210 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-19 14:26:23,211 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(378)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-19 14:26:23,211 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-19 14:26:23,212 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-19 14:26:23,323 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 46119
     [exec] 2015-05-19 14:26:23,324 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 46119
     [exec] 2015-05-19 14:26:23,324 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-19 14:26:23,324 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907 interrupted
     [exec] 2015-05-19 14:26:23,324 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
     [exec] 2015-05-19 14:26:23,426 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb)
     [exec] 2015-05-19 14:26:23,426 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-7584180-67.195.81.150-1432045580756
     [exec] 2015-05-19 14:26:23,427 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-19 14:26:23,427 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-19 14:26:23,427 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-19 14:26:23,427 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-19 14:26:23,433 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-19 14:26:23,433 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-19 14:26:23,435 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-19 14:26:23,435 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-19 14:26:23,435 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-19 14:26:23,436 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 2 1 
     [exec] 2015-05-19 14:26:23,438 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-19 14:26:23,439 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-19 14:26:23,440 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 35907
     [exec] 2015-05-19 14:26:23,441 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 35907
     [exec] 2015-05-19 14:26:23,441 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-19 14:26:23,441 INFO  blockmanagement.BlockManager (BlockManager.java:run(3686)) - Stopping ReplicationMonitor.
     [exec] 2015-05-19 14:26:23,477 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-19 14:26:23,477 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for standby state
     [exec] 2015-05-19 14:26:23,479 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-19 14:26:23,580 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-19 14:26:23,582 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-19 14:26:23,583 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
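
The console above is one full MiniDFSCluster start/stop cycle: format the two data volumes, register the DataNode with the NameNode at localhost/127.0.0.1:35907, process the first block reports, then tear everything down. test_native_mini_dfs drives this from C over JNI, but the equivalent Java-side lifecycle is roughly the sketch below (the test body is hypothetical; the cluster calls are the standard MiniDFSCluster API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterLifecycle {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)        // one DataNode; data1/data2 are its two storage dirs
            .build();               // formats storage, starts NameNode + DataNode
        try {
          cluster.waitActive();     // "Waiting for cluster to become active" / "Cluster is active"
          FileSystem fs = cluster.getFileSystem();
          // ... a real test would exercise fs here ...
        } finally {
          cluster.shutdown();       // "Shutting down the Mini HDFS Cluster"
        }
      }
    }
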
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
     [java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.973 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:53 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-05-19T14:28:29+00:00
[INFO] Final Memory: 54M/265M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:127 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 805181 bytes
Compression is 0.0%
Took 39 sec
Recording test results
Updating HADOOP-11581
Updating HADOOP-11949
Updating HADOOP-11944
Updating YARN-3541
Updating HADOOP-1540
Updating HADOOP-8934
Updating HADOOP-10971
Updating HADOOP-11103
Updating HADOOP-11884
Updating HDFS-8345
Updating HDFS-8405
Updating HDFS-8412
Updating HDFS-6348
Updating HDFS-4185
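
For what it's worth, this build did not fail on tests: the antrun "site" execution tries to copy src/main/docs into target/docs-src, and that source directory is absent from the workspace. Reduced to plain Java, the failing step behaves like the sketch below (paths taken from the log; the existence check is exactly what the quoted Ant <copy> lacks, hence the BuildException):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class DocsCopyStep {
      public static void main(String[] args) {
        // What the antrun "site" step effectively does, minus the recursive copy.
        Path docsSrc = Paths.get("hadoop-hdfs-project/hadoop-hdfs/src/main/docs");
        Path docsDst = Paths.get("hadoop-hdfs-project/hadoop-hdfs/target/docs-src");
        if (!Files.isDirectory(docsSrc)) {
          // Ant's <copy> fails at this point: "src/main/docs does not exist."
          throw new IllegalStateException(docsSrc + " does not exist.");
        }
        // ... recursive copy of docsSrc into docsDst would follow ...
      }
    }
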

Hadoop-Hdfs-trunk-Java8 - Build # 192 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/192/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7479 lines...]
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.261 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.077 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:50 h
[INFO] Finished at: 2015-05-21T14:31:28+00:00
[INFO] Final Memory: 52M/229M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797829 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating HDFS-4383
Updating HADOOP-10366
Updating HADOOP-11772
Updating YARN-2918
Updating YARN-3654
Updating YARN-3609
Updating YARN-3681
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1

Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:631)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1298)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1240)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1716)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1685)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1720)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
End of File Exception between local host is: "asf905.gq1.ygridcore.net/67.195.81.149"; destination host is: "localhost":41478; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf905.gq1.ygridcore.net/67.195.81.149"; destination host is: "localhost":41478; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1444)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy21.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:237)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1355)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1286)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:453)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:449)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:464)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:392)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:445)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1098)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:993)


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:443)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2436)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)
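
This one looks secondary rather than a lease-recovery bug in its own right: triggerMonitorCheckNow() requires the NameNode's lease monitor thread to be running, and by this point the shared MiniDFSCluster's NameNode appears to have already gone down with the ExitException above, so the Preconditions.checkState in LeaseManager fires. The failing call in the test corresponds roughly to the line below (constant names hypothetical; MiniDFSCluster.setLeasePeriod is the method in the trace):

    // TestLeaseRecovery2 shortens the lease limits so recovery can be forced
    // quickly; MiniDFSCluster.setLeasePeriod() forwards to the NameNode's
    // LeaseManager, whose monitor thread is already stopped here.
    cluster.setLeasePeriod(SOFT_LEASE_PERIOD, HARD_LEASE_PERIOD);
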


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
Call From asf905.gq1.ygridcore.net/67.195.81.149 to localhost:41478 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf905.gq1.ygridcore.net/67.195.81.149 to localhost:41478 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:628)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:373)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1500)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy21.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:237)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1355)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1286)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:453)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:449)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:464)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:392)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1734)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1721)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1714)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)



Hadoop-Hdfs-trunk-Java8 - Build # 193 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/193/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8598 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 48.247 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.045 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:51 h
[INFO] Finished at: 2015-05-22T14:30:18+00:00
[INFO] Final Memory: 55M/253M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:127 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797637 bytes
Compression is 0.0%
Took 28 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Hdfs-trunk-Java8 - Build # 195 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/195/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8584 lines...]
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 48.092 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.071 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-24T14:26:46+00:00
[INFO] Final Memory: 54M/164M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:127 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797647 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed
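
Note: this report is not contradictory. The console above shows the failure came from the maven-antrun-plugin "site" execution, which could not copy the missing hadoop-hdfs/src/main/docs directory, so no test actually failed; presumably this is fallout from the forrest documentation removal that HADOOP-12022 (visible in the change list of build #200 further down) was cleaning up.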

Hadoop-Hdfs-trunk-Java8 - Build # 201 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/201/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7363 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:00 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.076 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-05-29T14:40:15+00:00
[INFO] Final Memory: 52M/161M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 808006 bytes
Compression is 0.0%
Took 42 sec
Recording test results
Updating HADOOP-11934
Updating HADOOP-12042
Updating YARN-3716
Updating HDFS-7401
Updating HDFS-8443
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<21> but was:<20>

Stack Trace:
java.lang.AssertionError: expected:<21> but was:<20>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:463)
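
The failure shape above is JUnit 4's assertEquals message for mismatched numbers. A minimal, illustrative sketch reproducing the same error text follows; the class and counter are hypothetical stand-ins, not the real TestDNFencing logic, which, judging by the test name, counts messages queued on the standby namenode during appends:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AssertionShapeExample {
        // Hypothetical stand-in for the counted quantity; such off-by-one
        // count mismatches in HA fencing tests are usually timing-dependent.
        private int queuedMessageCount() {
            return 20;
        }

        @Test
        public void reproducesFailureShape() {
            // Fails with: java.lang.AssertionError: expected:<21> but was:<20>
            assertEquals(21, queuedMessageCount());
        }
    }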


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas

Error Message:

Expected: is <DISK>
     but: was <RAM_DISK>

Stack Trace:
java.lang.AssertionError: 
Expected: is <DISK>
     but: was <RAM_DISK>
	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
	at org.junit.Assert.assertThat(Assert.java:865)
	at org.junit.Assert.assertThat(Assert.java:832)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistTestCase.java:138)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:53)
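
Context for the DISK/RAM_DISK mismatch: with the lazy-persist feature a replica is first written to RAM_DISK and an asynchronous lazy writer saves it to DISK, so after the datanode restart the test expects the saved copy on DISK; finding RAM_DISK suggests the restart raced ahead of the save. Below is a minimal sketch of the kind of check ensureFileReplicasOnStorageType performs, assuming trunk's org.apache.hadoop.fs.StorageType enum; the helper itself is illustrative, not the real test code:

    import static org.hamcrest.CoreMatchers.is;
    import static org.junit.Assert.assertThat;

    import org.apache.hadoop.fs.StorageType;

    public class StorageTypeCheckExample {
        // Illustrative helper: assert a replica sits on the expected tier.
        static void ensureOnStorageType(StorageType actual, StorageType expected) {
            assertThat(actual, is(expected));
        }

        public static void main(String[] args) {
            // Fails exactly as above: "Expected: is <DISK> but: was <RAM_DISK>"
            ensureOnStorageType(StorageType.RAM_DISK, StorageType.DISK);
        }
    }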



Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #202

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/202/changes>


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #201

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/201/changes>

Changes:

[cnauroth] HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop. Contributed by Larry McCay.

[vinodkv] Fixed more FileSystemRMStateStore issues. Contributed by Vinod Kumar Vavilapalli.


[wangda] YARN-3716. Node-label-expression should be included by ResourceRequestPBImpl.toString. (Xianyin Xin via wangda)

[aajisaka] HDFS-8443. Document dfs.namenode.service.handler.count in hdfs-site.xml. Contributed by J.Andreina.

[vinayakumarb] HDFS-7401. Add block info to DFSInputStream's WARN message when it adds node to deadNodes (Contributed by Arshad Mohammad)

[vinayakumarb] HADOOP-12042. Users may see TrashPolicy if hdfs dfs -rm is run (Contributed by Andreina J)

------------------------------------------
[...truncated 7170 lines...]
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.128 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.386 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec - in org.apache.hadoop.fs.TestXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.735 sec - in org.apache.hadoop.fs.TestUnbuffer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.438 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 15.768 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.159 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.207 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.83 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.578 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.802 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.201 sec - in org.apache.hadoop.fs.TestVolumeId
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.754 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.034 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.858 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.799 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.113 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.679 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.749 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.93 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.582 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.766 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.326 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.941 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.669 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.7 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.051 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.021 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.798 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.39 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.704 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.897 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.275 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.765 sec - in org.apache.hadoop.TestRefreshCallQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.375 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.838 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.348 sec - in org.apache.hadoop.security.TestPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.088 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.939 sec - in org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.494 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.23 sec - in org.apache.hadoop.tracing.TestTracing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.218 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.396 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.993 sec - in org.apache.hadoop.net.TestNetworkTopology
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.001 sec - in org.apache.hadoop.TestGenericRefresh
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.548 sec - in org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.505 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.298 sec - in org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.14 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.475 sec - in org.apache.hadoop.cli.TestXAttrCLI
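
The line repeated between every test class above, "ignoring option MaxPermSize=768m; support was removed in 8.0", is expected noise on this Java 8 job: JDK 8 removed PermGen, class metadata now lives in native Metaspace (sized with -XX:MaxMetaspaceSize), and the surefire fork arguments still carry the old flag. A small illustrative program to confirm which memory pools a given JVM actually exposes:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class MemoryPoolsExample {
        public static void main(String[] args) {
            // On JDK 8 this lists Metaspace (and Compressed Class Space)
            // instead of the PermGen pool that -XX:MaxPermSize used to size.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                System.out.println(pool.getName() + " -> " + pool.getUsage());
            }
        }
    }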

Results :

Failed tests: 
  TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas:53->LazyPersistTestCase.ensureFileReplicasOnStorageType:138 
Expected: is <DISK>
     but: was <RAM_DISK>
  TestDNFencing.testQueueingWithAppend:463 expected:<21> but was:<20>

Tests run: 3439, Failures: 2, Errors: 0, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:00 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.076 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-05-29T14:40:15+00:00
[INFO] Final Memory: 52M/161M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 808006 bytes
Compression is 0.0%
Took 42 sec
Recording test results
Updating HADOOP-11934
Updating HADOOP-12042
Updating YARN-3716
Updating HDFS-7401
Updating HDFS-8443

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #200

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/200/changes>

Changes:

[aw] HADOOP-12004. test-patch breaks with reexec in certain situations (Sean Busbey via aw)

[aw] HADOOP-12035. shellcheck plugin displays a wrong version potentially (Kengo Seki via aw)

[aw] HADOOP-12030. test-patch should only report on newly introduced findbugs warnings. (Sean Busbey via aw)

[xgong] YARN-3723. Need to clearly document primaryFilter and otherInfo value

[aw] HADOOP-11406. xargs -P is not portable (Kengo Seki via aw)

[aw] HADOOP-11142. Remove hdfs dfs reference from file system shell documentation (Kengo Seki via aw)

[aw] HADOOP-7947. Validate XMLs if a relevant tool is available, when using scripts (Kengo Seki via aw)

[aw] HADOOP-11983. HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do (Sangjin Lee via aw)

[cmccabe] HDFS-8407. libhdfs hdfsListDirectory must set errno to 0 on success (Masatake Iwasaki via Colin P. McCabe)

[cmccabe] HDFS-8429. Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread.  (zhouyingchao via cmccabe)

[cmccabe] HADOOP-11894. Bump the version of Apache HTrace to 3.2.0-incubating (Masatake Iwasaki via Colin P. McCabe)

[aw] HADOOP-11930. test-patch in offline mode should tell maven to be in offline mode (Sean Busbey via aw)

[cnauroth] HADOOP-11959. WASB should configure client side socket timeout in storage client blob request options. Contributed by Ivan Mitic.

[aw]  HADOOP-12022. fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits (aw)

------------------------------------------
[...truncated 6906 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.513 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.965 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.784 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.121 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.48 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.176 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.437 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.298 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.508 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.431 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.044 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.305 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.866 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.754 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestClusterId
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.404 sec - in org.apache.hadoop.hdfs.server.namenode.TestClusterId
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.063 sec - in org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.936 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.377 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.009 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestQuotaByStorageType
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.126 sec - in org.apache.hadoop.hdfs.server.namenode.TestQuotaByStorageType
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.056 sec - in org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStartupOptionUpgrade
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.774 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupOptionUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditsDoubleBuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.201 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditsDoubleBuffer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.121 sec - in org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.004 sec - in org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.794 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.031 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.987 sec - in org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.095 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.322 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 83.798 sec - in org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.581 sec - in org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.028 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.632 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.585 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.607 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.534 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.157 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.564 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlockRetry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.23 sec - in org.apache.hadoop.hdfs.server.namenode.TestAddBlockRetry
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.96 sec - in org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.536 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.789 sec - in org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.065 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.634 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.916 sec - in org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImage
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.612 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.764 sec - in org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics

Results :

Tests in error: 
  TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas:48 » Bind Proble...

Tests run: 2468, Failures: 0, Errors: 1, Skipped: 7

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:07 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.089 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2015-05-28T23:37:59+00:00
[INFO] Final Memory: 64M/250M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5805452161273308640.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5837483741499838868tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2568232353684889632475tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
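
"The forked VM terminated without properly saying goodbye" means the surefire fork died before reporting results back: typically a JVM crash, an out-of-memory abort (note -XX:+HeapDumpOnOutOfMemoryError in the command line above), or test code calling System.exit. One hedged way to hunt for the last cause, on a JDK that still supports SecurityManager, is to log the call site of every exit; the class below is an illustrative sketch, not part of the build:

    import java.security.Permission;

    public class ExitTracerExample {
        public static void main(String[] args) {
            System.setSecurityManager(new SecurityManager() {
                @Override
                public void checkExit(int status) {
                    // Print who is exiting; not throwing lets the exit proceed.
                    new Throwable("System.exit(" + status + ") called from:")
                            .printStackTrace();
                }

                @Override
                public void checkPermission(Permission perm) {
                    // permit everything else
                }
            });
            System.exit(1); // stands in for a stray exit inside a test
        }
    }
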
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 808056 bytes
Compression is 0.0%
Took 26 sec
Recording test results
Updating HDFS-8407
Updating HADOOP-11983
Updating HDFS-8429
Updating HADOOP-11894
Updating HADOOP-11406
Updating HADOOP-12035
Updating HADOOP-11959
Updating HADOOP-11142
Updating HADOOP-11930
Updating HADOOP-12004
Updating HADOOP-12022
Updating HADOOP-7947
Updating HADOOP-12030
Updating YARN-3723

Hadoop-Hdfs-trunk-Java8 - Build # 200 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/200/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7099 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:07 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.089 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2015-05-28T23:37:59+00:00
[INFO] Final Memory: 64M/250M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5805452161273308640.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5837483741499838868tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2568232353684889632475tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 808056 bytes
Compression is 0.0%
Took 26 sec
Recording test results
Updating HDFS-8407
Updating HADOOP-11983
Updating HDFS-8429
Updating HADOOP-11894
Updating HADOOP-11406
Updating HADOOP-12035
Updating HADOOP-11959
Updating HADOOP-11142
Updating HADOOP-11930
Updating HADOOP-12004
Updating HADOOP-12022
Updating HADOOP-7947
Updating HADOOP-12030
Updating YARN-3723
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas

Error Message:
Problem binding to [localhost:33484] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:33484] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:414)
	at sun.nio.ch.Net.bind(Net.java:406)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:413)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:590)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:2338)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:828)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1146)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:433)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2419)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2307)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2354)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2041)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2080)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2060)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:48)
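
The BindException above is a cross-test port collision: per the stack trace, MiniDFSCluster.restartDataNode tried to re-bind the datanode's previous IPC port (localhost:33484) and something else still held it. A common remedy in test code is to bind to port 0 and let the kernel assign a free ephemeral port; a minimal, self-contained sketch:

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    public class EphemeralPortExample {
        public static void main(String[] args) throws Exception {
            // Binding to port 0 asks the kernel for a free ephemeral port,
            // avoiding "Address already in use" races on fixed port numbers.
            try (ServerSocket socket = new ServerSocket()) {
                socket.bind(new InetSocketAddress("localhost", 0));
                System.out.println("listening on free port " + socket.getLocalPort());
            }
        }
    }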



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #199

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/199/changes>

Changes:

[wheat9] Update CHANGES.txt for HDFS-8135.

[wangda] YARN-3647. RMWebServices api's should use updated api from CommonNodeLabelsManager to get NodeLabel object. (Sunil G via wangda)

[wangda] MAPREDUCE-6304. Specifying node labels when submitting MR jobs. (Naganarasimha G R via wangda)

[cnauroth] YARN-3626. On Windows localized resources are not moved to the front of the classpath when they should be. Contributed by Craig Welch.

[gera] MAPREDUCE-6336. Enable v2 FileOutputCommitter by default. (Siqi Li via gera)

[wangda] YARN-3581. Deprecate -directlyAccessNodeLabelStore in RMAdminCLI. (Naganarasimha G R via wangda)

[wang] HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang.

[aw] HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw)

[aw] YARN-2355. MAX_APP_ATTEMPTS_ENV may no longer be a useful env var for a container (Darrell Taylor via aw)

[aw] HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source (Darrell Taylor via aw)

[zjshen] YARN-3700. Made generic history service load a number of latest applications according to the parameter or the configuration. Contributed by Xuan Gong.

[cnauroth] HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer.

[devaraj] YARN-3722. Merge multiple TestWebAppUtils into

------------------------------------------
[...truncated 7305 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.017 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.816 sec - in org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.525 sec - in org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.138 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.215 sec - in org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.512 sec - in org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.41 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.788 sec - in org.apache.hadoop.TestRefreshCallQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.fs.TestXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.597 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.621 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.912 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.9 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.597 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.185 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.734 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.032 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.637 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.559 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.891 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.001 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.389 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.6 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.748 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.544 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.04 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.133 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.609 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.71 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.974 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.619 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.521 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.74 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.285 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.214 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.213 sec - in org.apache.hadoop.fs.TestVolumeId
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.205 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.611 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.631 sec - in org.apache.hadoop.fs.TestGlobPaths
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.71 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.474 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.497 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 15.486 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.589 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.778 sec - in org.apache.hadoop.fs.TestUnbuffer

Results :

Failed tests: 
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:426->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:445 » EOF
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » Connect Call From asf909.gq1.yg...

Tests run: 3440, Failures: 1, Errors: 4, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.865 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:47 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.060 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:48 h
[INFO] Finished at: 2015-05-28T14:24:35+00:00
[INFO] Final Memory: 52M/256M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 806642 bytes
Compression is 0.0%
Took 26 sec
Recording test results
Updating YARN-3700
Updating YARN-3581
Updating MAPREDUCE-6304
Updating YARN-2355
Updating HADOOP-9891
Updating YARN-3722
Updating HDFS-8431
Updating HDFS-8482
Updating HDFS-8135
Updating MAPREDUCE-6336
Updating HDFS-5033
Updating YARN-3626
Updating YARN-3647

Hadoop-Hdfs-trunk-Java8 - Build # 199 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/199/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7498 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.865 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:47 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.060 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:48 h
[INFO] Finished at: 2015-05-28T14:24:35+00:00
[INFO] Final Memory: 52M/256M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 806642 bytes
Compression is 0.0%
Took 26 sec
Recording test results
Updating YARN-3700
Updating YARN-3581
Updating MAPREDUCE-6304
Updating YARN-2355
Updating HADOOP-9891
Updating YARN-3722
Updating HDFS-8431
Updating HDFS-8482
Updating HDFS-8135
Updating MAPREDUCE-6336
Updating HDFS-5033
Updating YARN-3626
Updating YARN-3647
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:631)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1298)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1238)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1716)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:631)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1298)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1238)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1716)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1685)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1720)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
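
The exit originates in ExitUtil.terminate, which Hadoop tests normally arm so that a would-be System.exit surfaces as a catchable ExitException instead of killing the forked JVM. A minimal sketch of that mechanism, using the real ExitUtil.disableSystemExit, terminate and terminateCalled APIs from hadoop-common (the harness class and message string are illustrative):

    import org.apache.hadoop.util.ExitUtil;

    public class ExitUtilSketch {
      public static void main(String[] args) {
        ExitUtil.disableSystemExit(); // terminate() now throws, not exits
        try {
          ExitUtil.terminate(1, "could not sync journals"); // illustrative msg
        } catch (ExitUtil.ExitException ee) {
          // The exit is recorded; MiniDFSCluster.shutdown() later checks this
          // and raises "Test resulted in an unexpected exit" in tearDown.
          System.out.println("caught: " + ee.getMessage());
          System.out.println("terminateCalled: " + ExitUtil.terminateCalled());
        }
      }
    }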


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
End of File Exception between local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":49173; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":49173; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1444)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy21.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:237)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1355)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1286)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:453)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:449)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:464)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:392)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:445)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1098)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:993)
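
The root cause is DataInputStream.readInt() hitting end-of-stream: the NameNode side closed the connection during the restart, so the client could not read a complete 4-byte response. A self-contained sketch of that failure mode (plain JDK, nothing Hadoop-specific assumed):

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;

    public class EofSketch {
      public static void main(String[] args) throws Exception {
        // Only two bytes available, so readInt() cannot complete.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(new byte[] {0, 1}));
        try {
          in.readInt();
        } catch (EOFException e) {
          // Hadoop's IPC client wraps this into the "End of File Exception
          // between local host ... destination host" message shown above.
          System.out.println("EOF as in the trace: " + e);
        }
      }
    }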


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:443)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2436)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)
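
The IllegalStateException comes from a Guava Preconditions.checkState guard in LeaseManager.triggerMonitorCheckNow: the test asked for an immediate lease check after the monitor thread had already stopped. A minimal sketch of that guard pattern; the field and method bodies below are assumptions for illustration, not the LeaseManager source:

    import com.google.common.base.Preconditions;

    public class LeaseMonitorGuardSketch {
      private volatile boolean monitorRunning = false; // assumed flag

      void triggerMonitorCheckNow() {
        // Throws IllegalStateException with this message when the monitor is
        // not running, matching the failure above.
        Preconditions.checkState(monitorRunning, "Lease monitor is not running");
      }

      public static void main(String[] args) {
        try {
          new LeaseMonitorGuardSketch().triggerMonitorCheckNow();
        } catch (IllegalStateException e) {
          System.out.println(e.getMessage());
        }
      }
    }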


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
Call From asf909.gq1.ygridcore.net/67.195.81.153 to localhost:49173 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf909.gq1.ygridcore.net/67.195.81.153 to localhost:49173 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:628)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:373)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1500)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy21.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:237)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1355)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1286)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:453)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:449)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:464)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:392)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
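
"Connection refused" here means the client dialed the NameNode port before the restarted NameNode was listening again. Tests usually block on MiniDFSCluster.waitActive() after a restart before issuing RPCs; a minimal sketch under that assumption, using the real restartNameNode, waitActive and getFileSystem methods (the helper itself is hypothetical):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class RestartThenCreateSketch {
      static void restartAndCreate(MiniDFSCluster cluster) throws Exception {
        cluster.restartNameNode(); // brings the NameNode RPC server back up
        cluster.waitActive();      // block until it accepts connections again
        FileSystem fs = cluster.getFileSystem();
        fs.create(new Path("/after-restart")).close(); // no ConnectException
      }
    }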


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1734)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1721)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1714)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)
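
This class-level failure is the deferred consequence of the earlier exit: MiniDFSCluster.shutdown() notices that ExitUtil recorded a terminate() call during the run and fails tearDown with this AssertionError. A sketch of the tearDown pattern, assuming ExitUtil.resetFirstExitException() from hadoop-common to clear the recorded exit between tests (the helper is illustrative):

    import org.apache.hadoop.hdfs.MiniDFSCluster;
    import org.apache.hadoop.util.ExitUtil;

    public class TearDownSketch {
      static void tearDown(MiniDFSCluster cluster) {
        try {
          if (cluster != null) {
            // Throws "Test resulted in an unexpected exit" if ExitUtil
            // recorded a terminate() call earlier in the test run.
            cluster.shutdown();
          }
        } finally {
          ExitUtil.resetFirstExitException(); // clear state for the next test
        }
      }
    }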



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #198

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/198/changes>

Changes:

[ozawa] MAPREDUCE-6364. Add a Kill link to Task Attempts page. Contributed by Ryu Kobayashi.

[vinodkv] YARN-160. Enhanced NodeManager to automatically obtain cpu/memory values from underlying OS when configured to do so. Contributed by Varun Vasudev.

[jianhe] YARN-3632. Ordering policy should be allowed to reorder an application when demand changes. Contributed by Craig Welch

[cmccabe] HADOOP-11969. ThreadLocal initialization in several classes is not thread safe (Sean Busbey via Colin P. McCabe)

[wangda] YARN-3686. CapacityScheduler should trim default_node_label_expression. (Sunil G via wangda)

[aajisaka] HADOOP-11242. Record the time of calling in tracing span of IPC server. Contributed by Masatake Iwasaki.

------------------------------------------
[...truncated 7295 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.5 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in org.apache.hadoop.fs.TestXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.561 sec - in org.apache.hadoop.fs.TestUnbuffer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.625 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.572 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.165 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.237 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.938 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.663 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.828 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec - in org.apache.hadoop.fs.TestVolumeId
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.668 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.164 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.572 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.694 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.352 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.35 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.599 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.201 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.662 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.739 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.899 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.758 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.817 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.781 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.213 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.973 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.423 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.513 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.919 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.682 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.064 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.876 sec - in org.apache.hadoop.TestRefreshCallQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.303 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.924 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.327 sec - in org.apache.hadoop.security.TestPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.07 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.131 sec - in org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.52 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.134 sec - in org.apache.hadoop.tracing.TestTracing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.086 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.728 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.15 sec - in org.apache.hadoop.net.TestNetworkTopology
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.983 sec - in org.apache.hadoop.TestGenericRefresh
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.435 sec - in org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.577 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.234 sec - in org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.171 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.611 sec - in org.apache.hadoop.cli.TestXAttrCLI

Results :

Failed tests: 
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:426->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:445 » EOF
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » Connect Call From asf904.gq1.yg...

Tests run: 3439, Failures: 1, Errors: 4, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 52.224 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.128 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-05-27T14:34:21+00:00
[INFO] Final Memory: 52M/166M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 807967 bytes
Compression is 0.0%
Took 25 sec
Recording test results
Updating YARN-3686
Updating HADOOP-11242
Updating YARN-160
Updating HADOOP-11969
Updating YARN-3632
Updating MAPREDUCE-6364

Hadoop-Hdfs-trunk-Java8 - Build # 198 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/198/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7488 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 52.224 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.128 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-05-27T14:34:21+00:00
[INFO] Final Memory: 52M/166M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 807967 bytes
Compression is 0.0%
Took 25 sec
Recording test results
Updating YARN-3686
Updating HADOOP-11242
Updating YARN-160
Updating HADOOP-11969
Updating YARN-3632
Updating MAPREDUCE-6364
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:631)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1298)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1238)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1716)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:631)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1298)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1238)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1716)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1685)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1720)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1796)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1847)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1827)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
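
The root failure: while MiniDFSCluster.restartNameNode() was shutting the
NameNode down, FSEditLog.logSync() had no journal left to flush, so
ExitUtil.terminate() fired. When system exit is disabled, as HDFS tests
typically arrange, terminate() throws ExitUtil.ExitException instead of
killing the JVM, which is why it surfaces here as an exception. A minimal
sketch (class name and messages are ours, not from this build) of that
behavior:

    import org.apache.hadoop.util.ExitUtil;

    public class ExitUtilSketch {
      public static void main(String[] args) {
        // With system exit disabled, terminate() throws ExitException.
        ExitUtil.disableSystemExit();
        try {
          ExitUtil.terminate(1, "No journals available to flush");
        } catch (ExitUtil.ExitException e) {
          System.out.println("caught: " + e.getMessage());
        }
        // MiniDFSCluster later inspects this flag during shutdown.
        System.out.println("terminate called? " + ExitUtil.terminateCalled());
      }
    }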


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":38381; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":38381; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1444)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy21.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:237)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1355)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1286)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:453)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:449)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:464)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:392)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:445)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1098)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:993)
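
This EOFException is the in-flight RPC seeing the NameNode, killed by the
exit above, close its socket: Client.receiveRpcResponse() blocks in
DataInputStream.readInt() waiting for the next response length, and a
closed stream turns that read into EOFException. A self-contained sketch
(ours) of that JDK behavior:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;

    public class ReadIntEofSketch {
      public static void main(String[] args) throws IOException {
        // An empty stream stands in for a socket the server closed.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(new byte[0]));
        try {
          in.readInt();
        } catch (EOFException e) {
          System.out.println("EOF, as in Client.receiveRpcResponse");
        }
      }
    }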


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:443)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2436)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)
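
Another downstream casualty: once the NameNode stopped its active
services, the LeaseManager's monitor thread is gone, and
triggerMonitorCheckNow() refuses to run. A sketch (field and class names
are ours; the real LeaseManager may differ) of the Guava precondition that
produces this message:

    import com.google.common.base.Preconditions;

    class LeaseMonitorSketch {
      // Null once the NameNode leaves active state.
      private volatile Thread monitorThread;

      void triggerMonitorCheckNow() {
        Preconditions.checkState(monitorThread != null,
            "Lease monitor is not running");
        monitorThread.interrupt();  // wake the monitor for an immediate check
      }
    }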


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:38381 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:38381 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:628)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:373)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1500)
	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy21.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:237)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1355)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1286)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:453)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:449)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:464)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:392)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
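
By this point the NameNode at localhost:38381 is down for good, so the
same DistributedFileSystem.create() path fails at connection setup rather
than mid-call. For reference, a minimal sketch (path name is ours) of what
that create() call looks like against a healthy MiniDFSCluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class CreateSketch {
      public static void main(String[] args) throws Exception {
        MiniDFSCluster cluster =
            new MiniDFSCluster.Builder(new Configuration())
                .numDataNodes(1).build();
        try {
          cluster.waitActive();
          FileSystem fs = cluster.getFileSystem();
          // This issues the ClientProtocol.create RPC that failed above.
          fs.create(new Path("/create-sketch")).close();
        } finally {
          cluster.shutdown();
        }
      }
    }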


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1734)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1721)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1714)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)
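
The class-level failure is the follow-through: TestLeaseRecovery2's
tearDown() calls MiniDFSCluster.shutdown(), which fails the run when an
unexpected ExitUtil.terminate() was recorded during the test. A rough
sketch (ours; the actual MiniDFSCluster internals may differ) of that kind
of guard:

    import org.apache.hadoop.util.ExitUtil;

    class ShutdownGuardSketch {
      static void checkNoUnexpectedExit() {
        if (ExitUtil.terminateCalled()) {
          ExitUtil.resetFirstExitException();  // clear state for later tests
          throw new AssertionError("Test resulted in an unexpected exit");
        }
      }
    }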



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #197

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/197/changes>

Changes:

[xgong] YARN-2238. Filtering on UI sticks even if I move away from the page.

[aajisaka] HADOOP-8751. NPE in Token.toString() when Token is constructed using null identifier. Contributed by kanaka kumar avvaru.

[ozawa] YARN-2336. Fair scheduler's REST API returns a missing '[' bracket JSON for deep queue tree. Contributed by Kenji Kikushima and Akira Ajisaka.

------------------------------------------
[...truncated 8400 lines...]
     [exec] 2015-05-26 14:24:30,028 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-26 14:24:30,030 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 35575
     [exec] 2015-05-26 14:24:30,030 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-26 14:24:30,401 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:35575
     [exec] 2015-05-26 14:24:30,532 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(159)) - Listening HTTP traffic on /127.0.0.1:56228
     [exec] 2015-05-26 14:24:30,533 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-26 14:24:30,533 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-26 14:24:30,547 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-26 14:24:30,548 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 48880
     [exec] 2015-05-26 14:24:30,553 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:48880
     [exec] 2015-05-26 14:24:30,564 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-26 14:24:30,567 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-26 14:24:30,577 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:50579 starting to offer service
     [exec] 2015-05-26 14:24:30,583 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-26 14:24:30,583 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 48880: starting
     [exec] 2015-05-26 14:24:30,796 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 15188@asf905.gq1.ygridcore.net
     [exec] 2015-05-26 14:24:30,796 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:30,796 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-26 14:24:31,086 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,086 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1785760722-67.195.81.149-1432650268599>
     [exec] 2015-05-26 14:24:31,087 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1785760722-67.195.81.149-1432650268599> is not formatted for BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,087 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-26 14:24:31,087 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1785760722-67.195.81.149-1432650268599 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1785760722-67.195.81.149-1432650268599/current>
     [exec] 2015-05-26 14:24:31,090 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 15188@asf905.gq1.ygridcore.net
     [exec] 2015-05-26 14:24:31,090 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,090 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-26 14:24:31,107 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,107 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1785760722-67.195.81.149-1432650268599>
     [exec] 2015-05-26 14:24:31,107 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1785760722-67.195.81.149-1432650268599> is not formatted for BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,107 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-26 14:24:31,107 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1785760722-67.195.81.149-1432650268599 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1785760722-67.195.81.149-1432650268599/current>
     [exec] 2015-05-26 14:24:31,109 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=201986804;bpid=BP-1785760722-67.195.81.149-1432650268599;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=201986804;c=0;bpid=BP-1785760722-67.195.81.149-1432650268599;dnuuid=null
     [exec] 2015-05-26 14:24:31,111 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID a07be589-49a9-4043-8e14-817e04a6be96
     [exec] 2015-05-26 14:24:31,131 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-30276258-2188-456d-98b2-9309c2c1c2b0
     [exec] 2015-05-26 14:24:31,131 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-26 14:24:31,132 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-48bee9db-f500-4cc8-b197-bb03db2c3dbc
     [exec] 2015-05-26 14:24:31,132 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-26 14:24:31,135 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-26 14:24:31,143 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432656201143 with interval 21600000
     [exec] 2015-05-26 14:24:31,144 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,145 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1785760722-67.195.81.149-1432650268599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-26 14:24:31,146 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1785760722-67.195.81.149-1432650268599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-26 14:24:31,169 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1785760722-67.195.81.149-1432650268599 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 24ms
     [exec] 2015-05-26 14:24:31,169 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1785760722-67.195.81.149-1432650268599 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 24ms
     [exec] 2015-05-26 14:24:31,170 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-1785760722-67.195.81.149-1432650268599: 26ms
     [exec] 2015-05-26 14:24:31,171 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1785760722-67.195.81.149-1432650268599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-26 14:24:31,171 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1785760722-67.195.81.149-1432650268599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-26 14:24:31,171 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1785760722-67.195.81.149-1432650268599/current/replicas> doesn't exist 
     [exec] 2015-05-26 14:24:31,171 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1785760722-67.195.81.149-1432650268599/current/replicas> doesn't exist 
     [exec] 2015-05-26 14:24:31,172 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1785760722-67.195.81.149-1432650268599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 1ms
     [exec] 2015-05-26 14:24:31,172 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1785760722-67.195.81.149-1432650268599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 1ms
     [exec] 2015-05-26 14:24:31,173 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 3ms
     [exec] 2015-05-26 14:24:31,174 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-1785760722-67.195.81.149-1432650268599 (Datanode Uuid a07be589-49a9-4043-8e14-817e04a6be96) service to localhost/127.0.0.1:50579 beginning handshake with NN
     [exec] 2015-05-26 14:24:31,181 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-26 14:24:31,181 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-26 14:24:31,184 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:57806, datanodeUuid=a07be589-49a9-4043-8e14-817e04a6be96, infoPort=56228, infoSecurePort=0, ipcPort=48880, storageInfo=lv=-56;cid=testClusterID;nsid=201986804;c=0) storage a07be589-49a9-4043-8e14-817e04a6be96
     [exec] 2015-05-26 14:24:31,185 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-26 14:24:31,186 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:57806
     [exec] 2015-05-26 14:24:31,194 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-1785760722-67.195.81.149-1432650268599 (Datanode Uuid a07be589-49a9-4043-8e14-817e04a6be96) service to localhost/127.0.0.1:50579 successfully registered with NN
     [exec] 2015-05-26 14:24:31,194 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:50579 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-26 14:24:31,205 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-26 14:24:31,205 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-30276258-2188-456d-98b2-9309c2c1c2b0 for DN 127.0.0.1:57806
     [exec] 2015-05-26 14:24:31,207 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-48bee9db-f500-4cc8-b197-bb03db2c3dbc for DN 127.0.0.1:57806
     [exec] 2015-05-26 14:24:31,216 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-1785760722-67.195.81.149-1432650268599 (Datanode Uuid a07be589-49a9-4043-8e14-817e04a6be96) service to localhost/127.0.0.1:50579 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-26 14:24:31,216 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-1785760722-67.195.81.149-1432650268599 (Datanode Uuid a07be589-49a9-4043-8e14-817e04a6be96) service to localhost/127.0.0.1:50579
     [exec] 2015-05-26 14:24:31,229 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-30276258-2188-456d-98b2-9309c2c1c2b0 from datanode a07be589-49a9-4043-8e14-817e04a6be96
     [exec] 2015-05-26 14:24:31,230 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-30276258-2188-456d-98b2-9309c2c1c2b0 node DatanodeRegistration(127.0.0.1:57806, datanodeUuid=a07be589-49a9-4043-8e14-817e04a6be96, infoPort=56228, infoSecurePort=0, ipcPort=48880, storageInfo=lv=-56;cid=testClusterID;nsid=201986804;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
     [exec] 2015-05-26 14:24:31,230 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-48bee9db-f500-4cc8-b197-bb03db2c3dbc from datanode a07be589-49a9-4043-8e14-817e04a6be96
     [exec] 2015-05-26 14:24:31,231 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-48bee9db-f500-4cc8-b197-bb03db2c3dbc node DatanodeRegistration(127.0.0.1:57806, datanodeUuid=a07be589-49a9-4043-8e14-817e04a6be96, infoPort=56228, infoSecurePort=0, ipcPort=48880, storageInfo=lv=-56;cid=testClusterID;nsid=201986804;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
     [exec] 2015-05-26 14:24:31,250 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x5628a286760111c9,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 30 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-26 14:24:31,250 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,290 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-26 14:24:31,294 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-26 14:24:31,295 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-26 14:24:31,296 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-26 14:24:31,295 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-26 14:24:31,301 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-26 14:24:31,314 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 48880
     [exec] 2015-05-26 14:24:31,316 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 48880
     [exec] 2015-05-26 14:24:31,316 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-26 14:24:31,316 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-1785760722-67.195.81.149-1432650268599 (Datanode Uuid a07be589-49a9-4043-8e14-817e04a6be96) service to localhost/127.0.0.1:50579 interrupted
     [exec] 2015-05-26 14:24:31,316 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-1785760722-67.195.81.149-1432650268599 (Datanode Uuid a07be589-49a9-4043-8e14-817e04a6be96) service to localhost/127.0.0.1:50579
     [exec] 2015-05-26 14:24:31,420 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1785760722-67.195.81.149-1432650268599 (Datanode Uuid a07be589-49a9-4043-8e14-817e04a6be96)
     [exec] 2015-05-26 14:24:31,420 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-1785760722-67.195.81.149-1432650268599
     [exec] 2015-05-26 14:24:31,423 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-26 14:24:31,423 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-26 14:24:31,423 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-26 14:24:31,423 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-26 14:24:31,428 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-26 14:24:31,428 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-26 14:24:31,429 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-26 14:24:31,429 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-26 14:24:31,429 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-26 14:24:31,430 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 0 
     [exec] 2015-05-26 14:24:31,432 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-26 14:24:31,433 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-26 14:24:31,435 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 50579
     [exec] 2015-05-26 14:24:31,436 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 50579
     [exec] 2015-05-26 14:24:31,436 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-26 14:24:31,436 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-26 14:24:31,473 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-26 14:24:31,474 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-26 14:24:31,475 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-26 14:24:31,575 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-26 14:24:31,576 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-26 14:24:31,577 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
     [java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.943 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.077 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-26T14:26:41+00:00
[INFO] Final Memory: 54M/259M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:127 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797750 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-2336
Updating HADOOP-8751
Updating YARN-2238

Hadoop-Hdfs-trunk-Java8 - Build # 197 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/197/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8593 lines...]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.943 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.077 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-26T14:26:41+00:00
[INFO] Final Memory: 54M/259M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:127 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797750 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-2336
Updating HADOOP-8751
Updating YARN-2238
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #196

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/196/changes>

Changes:

[wheat9] HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang.

------------------------------------------
[...truncated 9275 lines...]
     [exec] 2015-05-25 14:24:51,888 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(678)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
     [exec] 2015-05-25 14:24:51,889 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-25 14:24:51,889 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-25 14:24:51,890 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 40842
     [exec] 2015-05-25 14:24:51,890 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-25 14:24:52,250 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:40842
     [exec] 2015-05-25 14:24:52,383 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(159)) - Listening HTTP traffic on /127.0.0.1:36722
     [exec] 2015-05-25 14:24:52,385 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-25 14:24:52,385 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-25 14:24:52,398 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-25 14:24:52,399 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 43612
     [exec] 2015-05-25 14:24:52,405 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:43612
     [exec] 2015-05-25 14:24:52,417 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-25 14:24:52,420 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-25 14:24:52,430 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:39217 starting to offer service
     [exec] 2015-05-25 14:24:52,438 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-25 14:24:52,439 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 43612: starting
     [exec] 2015-05-25 14:24:52,678 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 20898@asf901.gq1.ygridcore.net
     [exec] 2015-05-25 14:24:52,679 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,679 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-25 14:24:52,708 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,709 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-57386276-67.195.81.145-1432563890464>
     [exec] 2015-05-25 14:24:52,709 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-57386276-67.195.81.145-1432563890464> is not formatted for BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,710 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-25 14:24:52,710 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-57386276-67.195.81.145-1432563890464 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-57386276-67.195.81.145-1432563890464/current>
     [exec] 2015-05-25 14:24:52,712 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 20898@asf901.gq1.ygridcore.net
     [exec] 2015-05-25 14:24:52,712 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,712 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-25 14:24:52,735 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,735 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-57386276-67.195.81.145-1432563890464>
     [exec] 2015-05-25 14:24:52,735 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-57386276-67.195.81.145-1432563890464> is not formatted for BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,735 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-25 14:24:52,735 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-57386276-67.195.81.145-1432563890464 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-57386276-67.195.81.145-1432563890464/current>
     [exec] 2015-05-25 14:24:52,737 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=1719466150;bpid=BP-57386276-67.195.81.145-1432563890464;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=1719466150;c=0;bpid=BP-57386276-67.195.81.145-1432563890464;dnuuid=null
     [exec] 2015-05-25 14:24:52,739 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 0e642b02-0889-426c-bf09-21171e18df0a
     [exec] 2015-05-25 14:24:52,774 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-822b46a7-90f3-4373-9aff-267da15132e7
     [exec] 2015-05-25 14:24:52,775 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>, StorageType: DISK
     [exec] 2015-05-25 14:24:52,775 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-a8a8ba16-8a33-40f6-9439-b1f4d3b3d03c
     [exec] 2015-05-25 14:24:52,775 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>, StorageType: DISK
     [exec] 2015-05-25 14:24:52,779 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-25 14:24:52,779 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-25 14:24:52,779 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-25 14:24:52,785 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432584279785 with interval 21600000
     [exec] 2015-05-25 14:24:52,785 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,786 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-57386276-67.195.81.145-1432563890464 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-25 14:24:52,787 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-57386276-67.195.81.145-1432563890464 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-25 14:24:52,799 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-57386276-67.195.81.145-1432563890464 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 13ms
     [exec] 2015-05-25 14:24:52,799 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-57386276-67.195.81.145-1432563890464 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 11ms
     [exec] 2015-05-25 14:24:52,799 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-57386276-67.195.81.145-1432563890464: 13ms
     [exec] 2015-05-25 14:24:52,800 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-57386276-67.195.81.145-1432563890464 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-25 14:24:52,800 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-57386276-67.195.81.145-1432563890464 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-25 14:24:52,800 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-57386276-67.195.81.145-1432563890464/current/replicas> doesn't exist 
     [exec] 2015-05-25 14:24:52,800 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-57386276-67.195.81.145-1432563890464/current/replicas> doesn't exist 
     [exec] 2015-05-25 14:24:52,801 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-57386276-67.195.81.145-1432563890464 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-25 14:24:52,801 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-57386276-67.195.81.145-1432563890464 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-25 14:24:52,801 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-25 14:24:52,803 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-57386276-67.195.81.145-1432563890464 (Datanode Uuid 0e642b02-0889-426c-bf09-21171e18df0a) service to localhost/127.0.0.1:39217 beginning handshake with NN
     [exec] 2015-05-25 14:24:52,814 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:35933, datanodeUuid=0e642b02-0889-426c-bf09-21171e18df0a, infoPort=36722, infoSecurePort=0, ipcPort=43612, storageInfo=lv=-56;cid=testClusterID;nsid=1719466150;c=0) storage 0e642b02-0889-426c-bf09-21171e18df0a
     [exec] 2015-05-25 14:24:52,815 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-25 14:24:52,816 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:35933
     [exec] 2015-05-25 14:24:52,820 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool BP-57386276-67.195.81.145-1432563890464 (Datanode Uuid 0e642b02-0889-426c-bf09-21171e18df0a) service to localhost/127.0.0.1:39217 successfully registered with NN
     [exec] 2015-05-25 14:24:52,821 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:39217 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-25 14:24:52,831 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-25 14:24:52,831 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-822b46a7-90f3-4373-9aff-267da15132e7 for DN 127.0.0.1:35933
     [exec] 2015-05-25 14:24:52,833 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-a8a8ba16-8a33-40f6-9439-b1f4d3b3d03c for DN 127.0.0.1:35933
     [exec] 2015-05-25 14:24:52,842 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-57386276-67.195.81.145-1432563890464 (Datanode Uuid 0e642b02-0889-426c-bf09-21171e18df0a) service to localhost/127.0.0.1:39217 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-25 14:24:52,842 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-57386276-67.195.81.145-1432563890464 (Datanode Uuid 0e642b02-0889-426c-bf09-21171e18df0a) service to localhost/127.0.0.1:39217
     [exec] 2015-05-25 14:24:52,854 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-a8a8ba16-8a33-40f6-9439-b1f4d3b3d03c from datanode 0e642b02-0889-426c-bf09-21171e18df0a
     [exec] 2015-05-25 14:24:52,855 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-a8a8ba16-8a33-40f6-9439-b1f4d3b3d03c node DatanodeRegistration(127.0.0.1:35933, datanodeUuid=0e642b02-0889-426c-bf09-21171e18df0a, infoPort=36722, infoSecurePort=0, ipcPort=43612, storageInfo=lv=-56;cid=testClusterID;nsid=1719466150;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
     [exec] 2015-05-25 14:24:52,855 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-822b46a7-90f3-4373-9aff-267da15132e7 from datanode 0e642b02-0889-426c-bf09-21171e18df0a
     [exec] 2015-05-25 14:24:52,856 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-822b46a7-90f3-4373-9aff-267da15132e7 node DatanodeRegistration(127.0.0.1:35933, datanodeUuid=0e642b02-0889-426c-bf09-21171e18df0a, infoPort=36722, infoSecurePort=0, ipcPort=43612, storageInfo=lv=-56;cid=testClusterID;nsid=1719466150;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-25 14:24:52,871 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x27ffe037657d8f87,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 2 msec to generate and 26 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-25 14:24:52,871 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:52,887 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-25 14:24:52,891 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-25 14:24:52,891 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-25 14:24:52,892 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-25 14:24:52,892 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-25 14:24:52,894 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-25 14:24:52,905 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 43612
     [exec] 2015-05-25 14:24:52,906 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 43612
     [exec] 2015-05-25 14:24:52,906 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-57386276-67.195.81.145-1432563890464 (Datanode Uuid 0e642b02-0889-426c-bf09-21171e18df0a) service to localhost/127.0.0.1:39217 interrupted
     [exec] 2015-05-25 14:24:52,906 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-25 14:24:52,907 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-57386276-67.195.81.145-1432563890464 (Datanode Uuid 0e642b02-0889-426c-bf09-21171e18df0a) service to localhost/127.0.0.1:39217
     [exec] 2015-05-25 14:24:53,010 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-57386276-67.195.81.145-1432563890464 (Datanode Uuid 0e642b02-0889-426c-bf09-21171e18df0a)
     [exec] 2015-05-25 14:24:53,010 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-57386276-67.195.81.145-1432563890464
     [exec] 2015-05-25 14:24:53,011 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-25 14:24:53,012 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-25 14:24:53,012 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-25 14:24:53,012 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-25 14:24:53,018 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-25 14:24:53,018 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-25 14:24:53,019 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-25 14:24:53,019 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-25 14:24:53,020 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 0 
     [exec] 2015-05-25 14:24:53,020 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-25 14:24:53,022 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-25 14:24:53,022 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-25 14:24:53,024 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 39217
     [exec] 2015-05-25 14:24:53,024 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 39217
     [exec] 2015-05-25 14:24:53,026 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-25 14:24:53,026 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-25 14:24:53,057 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-25 14:24:53,057 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-25 14:24:53,058 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-25 14:24:53,159 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-25 14:24:53,160 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-25 14:24:53,160 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
     [java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:08 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.102 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-25T14:27:02+00:00
[INFO] Final Memory: 55M/257M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>">... @ 5:127 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 807149 bytes
Compression is 0.0%
Took 27 sec
Recording test results
Updating HDFS-8377
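
The test_native_mini_dfs run traced above exercises the full MiniDFSCluster lifecycle: the name and data directories are formatted, the DataNode registers and completes its handshake with the NameNode, the first block report comes back with a FinalizeCommand, waitActive() returns once the cluster is live, and everything is torn down in order. For reference, a minimal Java sketch of that same lifecycle against the MiniDFSCluster test API (the base directory and the smoke-test path below are illustrative, not taken from this build):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsLifecycle {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative base dir; the build above uses target/native/build/test/data.
        conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, "/tmp/minidfs-example");

        // Formats storage and starts one NameNode plus one DataNode, which is
        // what produces the "Formatting ..." and "registerDatanode" lines above.
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)
            .build();
        try {
          // Returns once the DataNode has registered and reported its blocks
          // ("Cluster is active" in the trace).
          cluster.waitActive();

          FileSystem fs = cluster.getFileSystem();
          fs.mkdirs(new Path("/smoke-test"));
        } finally {
          // Mirrors "Shutting down the Mini HDFS Cluster" through
          // "Shutdown complete." in the trace.
          cluster.shutdown();
        }
      }
    }

Putting shutdown() in the finally block keeps a failing test from leaking the cluster's ports and storage directories into later runs.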

Hadoop-Hdfs-trunk-Java8 - Build # 196 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/196/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 9468 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:08 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.102 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-25T14:27:02+00:00
[INFO] Final Memory: 55M/257M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:127 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 807149 bytes
Compression is 0.0%
Took 27 sec
Recording test results
Updating HDFS-8377
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed
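
Both failed builds above stop in the same place: the maven-antrun-plugin "site" execution tries to copy src/main/docs into target/docs-src, and that source directory does not exist under hadoop-hdfs, so Ant's <copy> task aborts with the BuildException shown in the [ERROR] lines. A rough Java rendering of why the task fails, with the relative paths taken from the error message (this is an illustration of the <copy> semantics, not code from the build):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class DocsCopyCheck {
      public static void main(String[] args) throws Exception {
        // Paths as reported in the [ERROR] lines above.
        Path src = Paths.get("hadoop-hdfs-project/hadoop-hdfs/src/main/docs");
        Path dest = Paths.get("hadoop-hdfs-project/hadoop-hdfs/target/docs-src");

        // Ant's <copy todir="..."> fails the build outright when the source
        // directory is missing, which is exactly the "does not exist" error.
        if (!Files.isDirectory(src)) {
          throw new IllegalStateException(src + " does not exist");
        }
        Files.createDirectories(dest);
        // A real copy would now recurse over src; omitted here for brevity.
      }
    }

Once the source directory is restored (or the stale antrun execution is removed from the POM), the console's own hint applies: resume from the failed module with mvn <goals> -rf :hadoop-hdfs.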

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #195

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/195/>

------------------------------------------
[...truncated 8391 lines...]
     [exec] 2015-05-24 14:24:37,055 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.datanode is not defined
     [exec] 2015-05-24 14:24:37,055 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(678)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
     [exec] 2015-05-24 14:24:37,055 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-24 14:24:37,056 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-24 14:24:37,057 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 48471
     [exec] 2015-05-24 14:24:37,057 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-24 14:24:37,409 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:48471
     [exec] 2015-05-24 14:24:37,529 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(162)) - Listening HTTP traffic on /127.0.0.1:45323
     [exec] 2015-05-24 14:24:37,530 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-24 14:24:37,530 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-24 14:24:37,545 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-24 14:24:37,545 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 42270
     [exec] 2015-05-24 14:24:37,552 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:42270
     [exec] 2015-05-24 14:24:37,561 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-24 14:24:37,564 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-24 14:24:37,574 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:34418 starting to offer service
     [exec] 2015-05-24 14:24:37,579 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-24 14:24:37,579 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 42270: starting
     [exec] 2015-05-24 14:24:37,809 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 2352@asf905.gq1.ygridcore.net
     [exec] 2015-05-24 14:24:37,809 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:37,809 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-24 14:24:37,835 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:37,835 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2084748531-67.195.81.149-1432477475599>
     [exec] 2015-05-24 14:24:37,836 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2084748531-67.195.81.149-1432477475599> is not formatted for BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:37,836 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-24 14:24:37,836 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-2084748531-67.195.81.149-1432477475599 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2084748531-67.195.81.149-1432477475599/current>
     [exec] 2015-05-24 14:24:37,838 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 2352@asf905.gq1.ygridcore.net
     [exec] 2015-05-24 14:24:37,838 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:37,838 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-24 14:24:37,853 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:37,853 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2084748531-67.195.81.149-1432477475599>
     [exec] 2015-05-24 14:24:37,853 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2084748531-67.195.81.149-1432477475599> is not formatted for BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:37,853 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-24 14:24:37,853 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-2084748531-67.195.81.149-1432477475599 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2084748531-67.195.81.149-1432477475599/current>
     [exec] 2015-05-24 14:24:37,855 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=377495776;bpid=BP-2084748531-67.195.81.149-1432477475599;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=377495776;c=0;bpid=BP-2084748531-67.195.81.149-1432477475599;dnuuid=null
     [exec] 2015-05-24 14:24:37,856 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID d507fd8f-f35b-41a4-b081-2fb018116f0e
     [exec] 2015-05-24 14:24:37,877 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-7c42c834-fc72-4e0c-865b-a09f3a5cb708
     [exec] 2015-05-24 14:24:37,877 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>, StorageType: DISK
     [exec] 2015-05-24 14:24:37,877 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-cd783a9c-7ef7-4c9e-a3ee-8da46e5caede
     [exec] 2015-05-24 14:24:37,877 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>, StorageType: DISK
     [exec] 2015-05-24 14:24:37,881 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-24 14:24:37,887 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432497941887 with interval 21600000
     [exec] 2015-05-24 14:24:37,888 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:37,889 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-2084748531-67.195.81.149-1432477475599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-24 14:24:37,889 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-2084748531-67.195.81.149-1432477475599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-24 14:24:37,902 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-2084748531-67.195.81.149-1432477475599 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 13ms
     [exec] 2015-05-24 14:24:37,903 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-2084748531-67.195.81.149-1432477475599 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 14ms
     [exec] 2015-05-24 14:24:37,903 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-2084748531-67.195.81.149-1432477475599: 15ms
     [exec] 2015-05-24 14:24:37,905 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-2084748531-67.195.81.149-1432477475599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-24 14:24:37,905 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-2084748531-67.195.81.149-1432477475599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-24 14:24:37,905 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2084748531-67.195.81.149-1432477475599/current/replicas> doesn't exist 
     [exec] 2015-05-24 14:24:37,905 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2084748531-67.195.81.149-1432477475599/current/replicas> doesn't exist 
     [exec] 2015-05-24 14:24:37,905 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-2084748531-67.195.81.149-1432477475599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-24 14:24:37,905 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-2084748531-67.195.81.149-1432477475599 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-24 14:24:37,906 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 3ms
     [exec] 2015-05-24 14:24:37,908 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-2084748531-67.195.81.149-1432477475599 (Datanode Uuid d507fd8f-f35b-41a4-b081-2fb018116f0e) service to localhost/127.0.0.1:34418 beginning handshake with NN
     [exec] 2015-05-24 14:24:37,921 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:43944, datanodeUuid=d507fd8f-f35b-41a4-b081-2fb018116f0e, infoPort=45323, infoSecurePort=0, ipcPort=42270, storageInfo=lv=-56;cid=testClusterID;nsid=377495776;c=0) storage d507fd8f-f35b-41a4-b081-2fb018116f0e
     [exec] 2015-05-24 14:24:37,922 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-24 14:24:37,923 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:43944
     [exec] 2015-05-24 14:24:37,928 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool BP-2084748531-67.195.81.149-1432477475599 (Datanode Uuid d507fd8f-f35b-41a4-b081-2fb018116f0e) service to localhost/127.0.0.1:34418 successfully registered with NN
     [exec] 2015-05-24 14:24:37,929 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:34418 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-24 14:24:37,933 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2332)) - No heartbeat from DataNode: 127.0.0.1:43944
     [exec] 2015-05-24 14:24:37,933 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-24 14:24:37,943 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-24 14:24:37,943 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-7c42c834-fc72-4e0c-865b-a09f3a5cb708 for DN 127.0.0.1:43944
     [exec] 2015-05-24 14:24:37,944 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-cd783a9c-7ef7-4c9e-a3ee-8da46e5caede for DN 127.0.0.1:43944
     [exec] 2015-05-24 14:24:37,952 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-2084748531-67.195.81.149-1432477475599 (Datanode Uuid d507fd8f-f35b-41a4-b081-2fb018116f0e) service to localhost/127.0.0.1:34418 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-24 14:24:37,953 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-2084748531-67.195.81.149-1432477475599 (Datanode Uuid d507fd8f-f35b-41a4-b081-2fb018116f0e) service to localhost/127.0.0.1:34418
     [exec] 2015-05-24 14:24:37,970 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-7c42c834-fc72-4e0c-865b-a09f3a5cb708 from datanode d507fd8f-f35b-41a4-b081-2fb018116f0e
     [exec] 2015-05-24 14:24:37,971 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-7c42c834-fc72-4e0c-865b-a09f3a5cb708 node DatanodeRegistration(127.0.0.1:43944, datanodeUuid=d507fd8f-f35b-41a4-b081-2fb018116f0e, infoPort=45323, infoSecurePort=0, ipcPort=42270, storageInfo=lv=-56;cid=testClusterID;nsid=377495776;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
     [exec] 2015-05-24 14:24:37,971 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-cd783a9c-7ef7-4c9e-a3ee-8da46e5caede from datanode d507fd8f-f35b-41a4-b081-2fb018116f0e
     [exec] 2015-05-24 14:24:37,971 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-cd783a9c-7ef7-4c9e-a3ee-8da46e5caede node DatanodeRegistration(127.0.0.1:43944, datanodeUuid=d507fd8f-f35b-41a4-b081-2fb018116f0e, infoPort=45323, infoSecurePort=0, ipcPort=42270, storageInfo=lv=-56;cid=testClusterID;nsid=377495776;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-24 14:24:37,987 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x3532f93575bf9d1c,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 4 msec to generate and 30 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-24 14:24:37,987 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:38,038 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-24 14:24:38,042 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-24 14:24:38,042 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-24 14:24:38,043 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-24 14:24:38,043 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-24 14:24:38,046 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-24 14:24:38,156 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 42270
     [exec] 2015-05-24 14:24:38,158 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 42270
     [exec] 2015-05-24 14:24:38,158 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-24 14:24:38,158 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-2084748531-67.195.81.149-1432477475599 (Datanode Uuid d507fd8f-f35b-41a4-b081-2fb018116f0e) service to localhost/127.0.0.1:34418 interrupted
     [exec] 2015-05-24 14:24:38,160 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-2084748531-67.195.81.149-1432477475599 (Datanode Uuid d507fd8f-f35b-41a4-b081-2fb018116f0e) service to localhost/127.0.0.1:34418
     [exec] 2015-05-24 14:24:38,263 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-2084748531-67.195.81.149-1432477475599 (Datanode Uuid d507fd8f-f35b-41a4-b081-2fb018116f0e)
     [exec] 2015-05-24 14:24:38,263 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-2084748531-67.195.81.149-1432477475599
     [exec] 2015-05-24 14:24:38,264 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-24 14:24:38,265 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-24 14:24:38,265 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-24 14:24:38,265 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-24 14:24:38,269 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-24 14:24:38,270 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-24 14:24:38,270 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-24 14:24:38,271 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-24 14:24:38,272 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-24 14:24:38,273 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 4 1 
     [exec] 2015-05-24 14:24:38,275 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-24 14:24:38,276 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-24 14:24:38,277 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 34418
     [exec] 2015-05-24 14:24:38,278 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 34418
     [exec] 2015-05-24 14:24:38,278 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-24 14:24:38,279 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-24 14:24:38,308 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-24 14:24:38,308 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-24 14:24:38,310 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-24 14:24:38,411 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-24 14:24:38,412 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-24 14:24:38,412 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
     [java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 48.092 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.071 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-24T14:26:46+00:00
[INFO] Final Memory: 54M/164M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:127 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797647 bytes
Compression is 0.0%
Took 11 sec
Recording test results
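
The failure above is mechanical rather than test-related: the maven-antrun-plugin "site" execution copies src/main/docs into the freshly created target/docs-src, and Ant's <copy> aborts with a BuildException when the source directory is missing from the module. A minimal Java sketch of the same precondition and copy, assuming module-relative paths (the class name and paths here are illustrative, not part of the actual build):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.NoSuchFileException;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.stream.Stream;

    public class DocsCopy {
        public static void main(String[] args) throws IOException {
            Path src = Paths.get("src/main/docs");    // what the antrun step reads
            Path dst = Paths.get("target/docs-src");  // what it populates
            if (!Files.isDirectory(src)) {
                // Effectively the "does not exist" BuildException in the log above.
                throw new NoSuchFileException(src.toString());
            }
            // Walk the tree top-down; directories are created before their contents.
            try (Stream<Path> walk = Files.walk(src)) {
                walk.forEach(p -> {
                    try {
                        Files.copy(p, dst.resolve(src.relativize(p)),
                                StandardCopyOption.REPLACE_EXISTING);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }
    }

Until src/main/docs exists again (or the copy is guarded), re-running with -e or -X as the hints suggest will likely only reproduce the same stack trace.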

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #194

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/194/changes>

Changes:

[ozawa] MAPREDUCE-6204. TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS.

[cmccabe] HADOOP-11927.  Fix "undefined reference to dlopen" error when compiling libhadooppipes (Xianyin Xin via Colin P. McCabe)

[xgong] YARN-3701. Isolating the error of generating a single app report when

[jianhe] YARN-3707. RM Web UI queue filter doesn't work. Contributed by Wangda Tan

------------------------------------------
[...truncated 8395 lines...]
     [exec] 2015-05-23 14:25:10,203 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 50648
     [exec] 2015-05-23 14:25:10,203 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-23 14:25:10,549 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:50648
     [exec] 2015-05-23 14:25:10,668 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(162)) - Listening HTTP traffic on /127.0.0.1:36004
     [exec] 2015-05-23 14:25:10,670 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-23 14:25:10,670 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-23 14:25:10,683 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-23 14:25:10,684 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 41023
     [exec] 2015-05-23 14:25:10,691 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:41023
     [exec] 2015-05-23 14:25:10,702 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-23 14:25:10,704 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-23 14:25:10,715 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:34420 starting to offer service
     [exec] 2015-05-23 14:25:10,723 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-23 14:25:10,723 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 41023: starting
     [exec] 2015-05-23 14:25:11,209 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 10029@asf903.gq1.ygridcore.net
     [exec] 2015-05-23 14:25:11,210 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,210 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-23 14:25:11,229 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,229 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1949022284-67.195.81.147-1432391108836>
     [exec] 2015-05-23 14:25:11,230 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1949022284-67.195.81.147-1432391108836> is not formatted for BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,230 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-23 14:25:11,230 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1949022284-67.195.81.147-1432391108836 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1949022284-67.195.81.147-1432391108836/current>
     [exec] 2015-05-23 14:25:11,232 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 10029@asf903.gq1.ygridcore.net
     [exec] 2015-05-23 14:25:11,232 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,233 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-23 14:25:11,247 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,247 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1949022284-67.195.81.147-1432391108836>
     [exec] 2015-05-23 14:25:11,248 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1949022284-67.195.81.147-1432391108836> is not formatted for BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,248 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-23 14:25:11,248 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1949022284-67.195.81.147-1432391108836 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1949022284-67.195.81.147-1432391108836/current>
     [exec] 2015-05-23 14:25:11,249 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=919961062;bpid=BP-1949022284-67.195.81.147-1432391108836;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=919961062;c=0;bpid=BP-1949022284-67.195.81.147-1432391108836;dnuuid=null
     [exec] 2015-05-23 14:25:11,251 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 6eca1b12-d89c-480e-a60e-1430c2660420
     [exec] 2015-05-23 14:25:11,271 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-604e5d49-1fef-44ff-82be-ebfa66105d65
     [exec] 2015-05-23 14:25:11,271 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-23 14:25:11,272 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-efddbaba-9f35-41f0-b563-614775074010
     [exec] 2015-05-23 14:25:11,272 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-23 14:25:11,275 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-23 14:25:11,281 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432402907281 with interval 21600000
     [exec] 2015-05-23 14:25:11,281 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,283 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1949022284-67.195.81.147-1432391108836 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-23 14:25:11,283 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1949022284-67.195.81.147-1432391108836 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-23 14:25:11,293 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1949022284-67.195.81.147-1432391108836 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 10ms
     [exec] 2015-05-23 14:25:11,293 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1949022284-67.195.81.147-1432391108836 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 10ms
     [exec] 2015-05-23 14:25:11,293 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-1949022284-67.195.81.147-1432391108836: 12ms
     [exec] 2015-05-23 14:25:11,294 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1949022284-67.195.81.147-1432391108836 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-23 14:25:11,294 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1949022284-67.195.81.147-1432391108836 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-23 14:25:11,294 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1949022284-67.195.81.147-1432391108836/current/replicas> doesn't exist 
     [exec] 2015-05-23 14:25:11,294 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1949022284-67.195.81.147-1432391108836/current/replicas> doesn't exist 
     [exec] 2015-05-23 14:25:11,295 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1949022284-67.195.81.147-1432391108836 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-23 14:25:11,295 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1949022284-67.195.81.147-1432391108836 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-23 14:25:11,295 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-23 14:25:11,297 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-1949022284-67.195.81.147-1432391108836 (Datanode Uuid 6eca1b12-d89c-480e-a60e-1430c2660420) service to localhost/127.0.0.1:34420 beginning handshake with NN
     [exec] 2015-05-23 14:25:11,310 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:34374, datanodeUuid=6eca1b12-d89c-480e-a60e-1430c2660420, infoPort=36004, infoSecurePort=0, ipcPort=41023, storageInfo=lv=-56;cid=testClusterID;nsid=919961062;c=0) storage 6eca1b12-d89c-480e-a60e-1430c2660420
     [exec] 2015-05-23 14:25:11,310 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-23 14:25:11,311 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-23 14:25:11,311 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:34374
     [exec] 2015-05-23 14:25:11,312 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-23 14:25:11,317 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-1949022284-67.195.81.147-1432391108836 (Datanode Uuid 6eca1b12-d89c-480e-a60e-1430c2660420) service to localhost/127.0.0.1:34420 successfully registered with NN
     [exec] 2015-05-23 14:25:11,317 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:34420 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-23 14:25:11,329 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-23 14:25:11,330 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-604e5d49-1fef-44ff-82be-ebfa66105d65 for DN 127.0.0.1:34374
     [exec] 2015-05-23 14:25:11,331 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-efddbaba-9f35-41f0-b563-614775074010 for DN 127.0.0.1:34374
     [exec] 2015-05-23 14:25:11,341 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-1949022284-67.195.81.147-1432391108836 (Datanode Uuid 6eca1b12-d89c-480e-a60e-1430c2660420) service to localhost/127.0.0.1:34420 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-23 14:25:11,341 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-1949022284-67.195.81.147-1432391108836 (Datanode Uuid 6eca1b12-d89c-480e-a60e-1430c2660420) service to localhost/127.0.0.1:34420
     [exec] 2015-05-23 14:25:11,353 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-604e5d49-1fef-44ff-82be-ebfa66105d65 from datanode 6eca1b12-d89c-480e-a60e-1430c2660420
     [exec] 2015-05-23 14:25:11,354 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-604e5d49-1fef-44ff-82be-ebfa66105d65 node DatanodeRegistration(127.0.0.1:34374, datanodeUuid=6eca1b12-d89c-480e-a60e-1430c2660420, infoPort=36004, infoSecurePort=0, ipcPort=41023, storageInfo=lv=-56;cid=testClusterID;nsid=919961062;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
     [exec] 2015-05-23 14:25:11,354 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-efddbaba-9f35-41f0-b563-614775074010 from datanode 6eca1b12-d89c-480e-a60e-1430c2660420
     [exec] 2015-05-23 14:25:11,354 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-efddbaba-9f35-41f0-b563-614775074010 node DatanodeRegistration(127.0.0.1:34374, datanodeUuid=6eca1b12-d89c-480e-a60e-1430c2660420, infoPort=36004, infoSecurePort=0, ipcPort=41023, storageInfo=lv=-56;cid=testClusterID;nsid=919961062;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-23 14:25:11,370 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xf3b786bcfaec774,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 26 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-23 14:25:11,370 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,419 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-23 14:25:11,424 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-23 14:25:11,424 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-23 14:25:11,424 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-23 14:25:11,424 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-23 14:25:11,426 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-23 14:25:11,538 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 41023
     [exec] 2015-05-23 14:25:11,540 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 41023
     [exec] 2015-05-23 14:25:11,540 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-1949022284-67.195.81.147-1432391108836 (Datanode Uuid 6eca1b12-d89c-480e-a60e-1430c2660420) service to localhost/127.0.0.1:34420 interrupted
     [exec] 2015-05-23 14:25:11,540 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-1949022284-67.195.81.147-1432391108836 (Datanode Uuid 6eca1b12-d89c-480e-a60e-1430c2660420) service to localhost/127.0.0.1:34420
     [exec] 2015-05-23 14:25:11,540 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-23 14:25:11,643 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1949022284-67.195.81.147-1432391108836 (Datanode Uuid 6eca1b12-d89c-480e-a60e-1430c2660420)
     [exec] 2015-05-23 14:25:11,643 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-1949022284-67.195.81.147-1432391108836
     [exec] 2015-05-23 14:25:11,644 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-23 14:25:11,645 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-23 14:25:11,645 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-23 14:25:11,645 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-23 14:25:11,649 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-23 14:25:11,649 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-23 14:25:11,650 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-23 14:25:11,650 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-23 14:25:11,651 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-23 14:25:11,651 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 1 
     [exec] 2015-05-23 14:25:11,652 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-23 14:25:11,653 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-23 14:25:11,655 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 34420
     [exec] 2015-05-23 14:25:11,655 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 34420
     [exec] 2015-05-23 14:25:11,656 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-23 14:25:11,656 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-23 14:25:11,691 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-23 14:25:11,691 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-23 14:25:11,692 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-23 14:25:11,793 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-23 14:25:11,794 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-23 14:25:11,794 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
     [java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:05 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.082 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-23T14:27:18+00:00
[INFO] Final Memory: 54M/157M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:127 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 810586 bytes
Compression is 0.0%
Took 20 sec
Recording test results
Updating YARN-3701
Updating YARN-3707
Updating MAPREDUCE-6204
Updating HADOOP-11927
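
For context on the console output itself: judging by the echo tag, test_native_mini_dfs exercises the libhdfs native client against an in-process cluster, and the start/stop sequence in the log (storage format, DataNode registration, first block report, "Cluster is active", then shutdown) is the standard MiniDFSCluster lifecycle. A minimal Java sketch of that lifecycle, assuming the hadoop-hdfs test artifacts are on the classpath (the class and HDFS path names are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsLifecycle {
        public static void main(String[] args) throws Exception {
            Configuration conf = new HdfsConfiguration();
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                    .numDataNodes(1)      // one DataNode, as in the log above
                    .build();             // formats name/data dirs, starts NN and DN
            try {
                cluster.waitActive();     // returns once "Cluster is active"
                FileSystem fs = cluster.getFileSystem();
                fs.mkdirs(new Path("/smoke-test"));  // any trivial operation
            } finally {
                cluster.shutdown();       // produces the "Shutting down ..." sequence
            }
        }
    }

The WARN lines during shutdown (DirectoryScanner, BPServiceActor "interrupted") are part of the orderly teardown, not the cause of the build failure.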

Hadoop-Hdfs-trunk-Java8 - Build # 194 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/194/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8588 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [01:05 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.082 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-23T14:27:18+00:00
[INFO] Final Memory: 54M/157M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:127 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 810586 bytes
Compression is 0.0%
Took 20 sec
Recording test results
Updating YARN-3701
Updating YARN-3707
Updating MAPREDUCE-6204
Updating HADOOP-11927
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed
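
Note the combination above: "All tests passed" yet BUILD FAILURE, because the reactor dies in the maven-antrun site step rather than in a test. Once the missing directory is accounted for, the reactor can be resumed from the failed module as the [ERROR] hint says; a minimal sketch of driving that from Java via ProcessBuilder (the goal list is illustrative):

    import java.io.IOException;

    public class ResumeBuild {
        public static void main(String[] args)
                throws IOException, InterruptedException {
            // Re-run with -e for the stack trace and resume from hadoop-hdfs,
            // mirroring the [ERROR] hints in the console output above.
            Process mvn = new ProcessBuilder(
                    "mvn", "-e", "install", "-rf", ":hadoop-hdfs")
                    .inheritIO()
                    .start();
            System.exit(mvn.waitFor());
        }
    }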

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #193

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/193/changes>

Changes:

[aajisaka] YARN-3694. Fix dead link for TimelineServer REST API. Contributed by Jagadesh Kiran N.

[devaraj] YARN-3646. Applications are getting stuck some times in case of retry

[wheat9] HDFS-8421. Move startFile() and related functions into FSDirWriteFileOp. Contributed by Haohui Mai.

[xyao] HDFS-8451. DFSClient probe for encryption testing interprets empty URI property for enabled. Contributed by Steve Loughran.

[kasha] YARN-3675. FairScheduler: RM quits when node removal races with continuous-scheduling on the same node. (Anubhav Dhoot via kasha)

[jghoman] HADOOP-12016. Typo in FileSystem::listStatusIterator. Contributed by Arthur Vigil.

[vinodkv] YARN-3684. Changed ContainerExecutor's primary lifecycle methods to use a more extensible mechanism of context objects. Contributed by Sidharta Seethana.

[arp] HDFS-8454. Remove unnecessary throttling in TestDatanodeDeath. (Arpit Agarwal)

[aajisaka] HADOOP-12014. hadoop-config.cmd displays a wrong error message. Contributed by Kengo Seki.

[aajisaka] HADOOP-11955. Fix a typo in the cluster setup doc. Contributed by Yanjun Wang.

[aajisaka] HADOOP-11594. Improve the readability of site index of documentation. Contributed by Masatake Iwasaki.

[vinayakumarb] HDFS-8268. Port conflict log for data node server is not sufficient (Contributed by Mohammad Shahid Khan)

[junping_du] YARN-3594. WintuilsProcessStubExecutor.startStreamReader leaks streams. Contributed by Lars Francke.

[vinayakumarb] HADOOP-11743. maven doesn't clean all the site files (Contributed by ramtin)

------------------------------------------
[...truncated 8405 lines...]
     [exec] 2015-05-22 14:28:08,156 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-22 14:28:08,166 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:32836 starting to offer service
     [exec] 2015-05-22 14:28:08,172 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-22 14:28:08,172 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 47214: starting
     [exec] 2015-05-22 14:28:08,674 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 20815@asf905.gq1.ygridcore.net
     [exec] 2015-05-22 14:28:08,674 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,675 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-22 14:28:08,692 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,692 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-25527340-67.195.81.149-1432304886340>
     [exec] 2015-05-22 14:28:08,693 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-25527340-67.195.81.149-1432304886340> is not formatted for BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,693 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-22 14:28:08,693 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-25527340-67.195.81.149-1432304886340 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-25527340-67.195.81.149-1432304886340/current>
     [exec] 2015-05-22 14:28:08,695 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 20815@asf905.gq1.ygridcore.net
     [exec] 2015-05-22 14:28:08,695 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,695 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-22 14:28:08,709 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,709 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-25527340-67.195.81.149-1432304886340>
     [exec] 2015-05-22 14:28:08,709 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-25527340-67.195.81.149-1432304886340> is not formatted for BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,710 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-22 14:28:08,710 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-25527340-67.195.81.149-1432304886340 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-25527340-67.195.81.149-1432304886340/current>
     [exec] 2015-05-22 14:28:08,711 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=1913296205;bpid=BP-25527340-67.195.81.149-1432304886340;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=1913296205;c=0;bpid=BP-25527340-67.195.81.149-1432304886340;dnuuid=null
     [exec] 2015-05-22 14:28:08,713 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 687b3464-af72-4db0-992c-baf74ecefdb6
     [exec] 2015-05-22 14:28:08,733 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-2f17cbd4-0da2-41b9-a48f-7bc7117e4521
     [exec] 2015-05-22 14:28:08,733 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-22 14:28:08,733 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-e99ecc2a-7e9b-4242-ba20-8889042f05a4
     [exec] 2015-05-22 14:28:08,733 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-22 14:28:08,736 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-22 14:28:08,743 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432314171743 with interval 21600000
     [exec] 2015-05-22 14:28:08,744 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,744 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-25527340-67.195.81.149-1432304886340 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-22 14:28:08,744 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-25527340-67.195.81.149-1432304886340 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-22 14:28:08,759 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-25527340-67.195.81.149-1432304886340 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 14ms
     [exec] 2015-05-22 14:28:08,763 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-25527340-67.195.81.149-1432304886340 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 18ms
     [exec] 2015-05-22 14:28:08,763 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-25527340-67.195.81.149-1432304886340: 19ms
     [exec] 2015-05-22 14:28:08,766 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-25527340-67.195.81.149-1432304886340 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-22 14:28:08,766 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-25527340-67.195.81.149-1432304886340/current/replicas> doesn't exist 
     [exec] 2015-05-22 14:28:08,766 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-25527340-67.195.81.149-1432304886340 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-22 14:28:08,766 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-25527340-67.195.81.149-1432304886340 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-22 14:28:08,767 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-25527340-67.195.81.149-1432304886340/current/replicas> doesn't exist 
     [exec] 2015-05-22 14:28:08,767 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-25527340-67.195.81.149-1432304886340 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-22 14:28:08,767 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 3ms
     [exec] 2015-05-22 14:28:08,770 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-25527340-67.195.81.149-1432304886340 (Datanode Uuid 687b3464-af72-4db0-992c-baf74ecefdb6) service to localhost/127.0.0.1:32836 beginning handshake with NN
     [exec] 2015-05-22 14:28:08,773 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-22 14:28:08,773 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-22 14:28:08,779 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:57667, datanodeUuid=687b3464-af72-4db0-992c-baf74ecefdb6, infoPort=49040, infoSecurePort=0, ipcPort=47214, storageInfo=lv=-56;cid=testClusterID;nsid=1913296205;c=0) storage 687b3464-af72-4db0-992c-baf74ecefdb6
     [exec] 2015-05-22 14:28:08,780 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-22 14:28:08,781 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:57667
     [exec] 2015-05-22 14:28:08,785 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-25527340-67.195.81.149-1432304886340 (Datanode Uuid 687b3464-af72-4db0-992c-baf74ecefdb6) service to localhost/127.0.0.1:32836 successfully registered with NN
     [exec] 2015-05-22 14:28:08,785 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:32836 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-22 14:28:08,795 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-22 14:28:08,795 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-2f17cbd4-0da2-41b9-a48f-7bc7117e4521 for DN 127.0.0.1:57667
     [exec] 2015-05-22 14:28:08,796 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-e99ecc2a-7e9b-4242-ba20-8889042f05a4 for DN 127.0.0.1:57667
     [exec] 2015-05-22 14:28:08,805 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-25527340-67.195.81.149-1432304886340 (Datanode Uuid 687b3464-af72-4db0-992c-baf74ecefdb6) service to localhost/127.0.0.1:32836 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-22 14:28:08,805 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-25527340-67.195.81.149-1432304886340 (Datanode Uuid 687b3464-af72-4db0-992c-baf74ecefdb6) service to localhost/127.0.0.1:32836
     [exec] 2015-05-22 14:28:08,819 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-2f17cbd4-0da2-41b9-a48f-7bc7117e4521 from datanode 687b3464-af72-4db0-992c-baf74ecefdb6
     [exec] 2015-05-22 14:28:08,820 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-2f17cbd4-0da2-41b9-a48f-7bc7117e4521 node DatanodeRegistration(127.0.0.1:57667, datanodeUuid=687b3464-af72-4db0-992c-baf74ecefdb6, infoPort=49040, infoSecurePort=0, ipcPort=47214, storageInfo=lv=-56;cid=testClusterID;nsid=1913296205;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-22 14:28:08,820 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-e99ecc2a-7e9b-4242-ba20-8889042f05a4 from datanode 687b3464-af72-4db0-992c-baf74ecefdb6
     [exec] 2015-05-22 14:28:08,820 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-e99ecc2a-7e9b-4242-ba20-8889042f05a4 node DatanodeRegistration(127.0.0.1:57667, datanodeUuid=687b3464-af72-4db0-992c-baf74ecefdb6, infoPort=49040, infoSecurePort=0, ipcPort=47214, storageInfo=lv=-56;cid=testClusterID;nsid=1913296205;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-22 14:28:08,836 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xd8a1fc17cbf07700,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 27 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-22 14:28:08,836 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:08,882 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-22 14:28:08,887 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-22 14:28:08,887 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-22 14:28:08,887 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-22 14:28:08,887 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-22 14:28:08,890 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-22 14:28:09,003 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 47214
     [exec] 2015-05-22 14:28:09,004 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 47214
     [exec] 2015-05-22 14:28:09,004 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-25527340-67.195.81.149-1432304886340 (Datanode Uuid 687b3464-af72-4db0-992c-baf74ecefdb6) service to localhost/127.0.0.1:32836 interrupted
     [exec] 2015-05-22 14:28:09,004 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-22 14:28:09,004 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-25527340-67.195.81.149-1432304886340 (Datanode Uuid 687b3464-af72-4db0-992c-baf74ecefdb6) service to localhost/127.0.0.1:32836
     [exec] 2015-05-22 14:28:09,108 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-25527340-67.195.81.149-1432304886340 (Datanode Uuid 687b3464-af72-4db0-992c-baf74ecefdb6)
     [exec] 2015-05-22 14:28:09,108 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-25527340-67.195.81.149-1432304886340
     [exec] 2015-05-22 14:28:09,109 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-22 14:28:09,109 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-22 14:28:09,110 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-22 14:28:09,110 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-22 14:28:09,114 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-22 14:28:09,114 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-22 14:28:09,115 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-22 14:28:09,115 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-22 14:28:09,116 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-22 14:28:09,118 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 0 
     [exec] 2015-05-22 14:28:09,120 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-22 14:28:09,121 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-22 14:28:09,122 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 32836
     [exec] 2015-05-22 14:28:09,123 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 32836
     [exec] 2015-05-22 14:28:09,123 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-22 14:28:09,123 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-22 14:28:09,157 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-22 14:28:09,157 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-22 14:28:09,159 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-22 14:28:09,260 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-22 14:28:09,261 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-22 14:28:09,261 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
     [java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
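The "banned from the build" banners above are what the Maven reactor prints when the build runs in fail-at-end mode: modules downstream of a failed module (hadoop-hdfs here) are skipped, while independent modules such as hadoop-hdfs-project still build. A minimal invocation exercising that mode (the Jenkins job's actual goals may differ) is:

    mvn -fae clean install
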
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 48.247 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.045 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:51 h
[INFO] Finished at: 2015-05-22T14:30:18+00:00
[INFO] Final Memory: 55M/253M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:127 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
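
The failure above is the antrun "site" target trying to copy src/main/docs, which is absent from the hadoop-hdfs tree in this workspace. A sketch of a tolerant copy, assuming the task in build-main.xml resembles the fragment quoted in the error, is:

    <copy todir="${project.build.directory}/docs-src" failonerror="false">
      <fileset dir="${basedir}/src/main/docs"/>
    </copy>

failonerror is a standard attribute of Ant's <copy> task; setting it to false turns a missing source directory into a logged warning instead of a BuildException.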
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797637 bytes
Compression is 0.0%
Took 28 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #192

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/192/changes>

Changes:

[wangda] Move YARN-2918 from 2.8.0 to 2.7.1

[xgong] YARN-3681. yarn cmd says "could not find main class 'queue'" in windows.

[jianhe] YARN-3609. Load node labels from storage inside RM serviceStart. Contributed by Wangda Tan

[jianhe] YARN-3654. ContainerLogsPage web UI should not have meta-refresh. Contributed by Xuan Gong

[wheat9] HADOOP-11772. RPC Invoker relies on static ClientCache which has synchronized(this) blocks. Contributed by Haohui Mai.

[aajisaka] HDFS-4383. Document the lease limits. Contributed by Arshad Mohammad.

[aajisaka] HADOOP-10366. Add whitespaces between classes for values in core-default.xml to fit better in browser. Contributed by kanaka kumar avvaru.

------------------------------------------
[...truncated 7286 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.642 sec - in org.apache.hadoop.hdfs.util.TestDiff
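
The HotSpot warning repeated before each test class below is benign: JDK 8 removed the permanent generation, so -XX:MaxPermSize is parsed and ignored, and every forked test JVM prints the warning once. Assuming the flag comes from the surefire argLine or MAVEN_OPTS, the Java 8 equivalent would be:

    -XX:MaxMetaspaceSize=768m
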
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.43 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.258 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.226 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.457 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.087 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.962 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.3 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.924 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.678 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.488 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.991 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.182 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.775 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.817 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.1 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.589 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.376 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.082 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.001 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.528 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.979 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.161 sec - in org.apache.hadoop.hdfs.TestReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.666 sec - in org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.759 sec - in org.apache.hadoop.hdfs.TestPipelines
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.454 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.328 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.832 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.262 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.54 sec - in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.936 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.266 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.666 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.924 sec - in org.apache.hadoop.hdfs.TestConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.941 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.169 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.959 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.805 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.978 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.887 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.366 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Failed tests: 
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:426->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:445 » EOF
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » Connect Call From asf905.gq1.yg...

Tests run: 3437, Failures: 1, Errors: 4, Skipped: 17
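
Of the five TestLeaseRecovery2 problems above, the tearDown failure and the Exit/EOF/Connect errors are consistent with the test JVM exiting unexpectedly mid-run. To check whether they reproduce away from the asf905 build host, the single class can be rerun from the source root with standard surefire test selection:

    mvn test -Dtest=TestLeaseRecovery2 -pl hadoop-hdfs-project/hadoop-hdfs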

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.261 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.077 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:50 h
[INFO] Finished at: 2015-05-21T14:31:28+00:00
[INFO] Final Memory: 52M/229M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
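
Per-class detail for these failures sits in the surefire-reports directory named in the error above, with one plain-text and one XML report per test class in the standard surefire layout, e.g.:

    hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/org.apache.hadoop.hdfs.TestLeaseRecovery2.txt
    hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/TEST-org.apache.hadoop.hdfs.TestLeaseRecovery2.xml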
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797829 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating HDFS-4383
Updating HADOOP-10366
Updating HADOOP-11772
Updating YARN-2918
Updating YARN-3654
Updating YARN-3609
Updating YARN-3681

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #191

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/191/changes>

Changes:

[kihwal] HDFS-8131. Implement a space balanced block placement policy. Contributed by Liu Shaohui.

[xgong] YARN-3601. Fix UT TestRMFailover.testRMWebAppRedirect. Contributed by Weiwei Yang

[raviprak] YARN-3302. TestDockerContainerExecutor should run automatically if it can detect docker in the usual place (Ravindra Kumar Naik via raviprak)

[cmccabe] HADOOP-11970. Replace uses of ThreadLocal<Random> with JDK7 ThreadLocalRandom (Sean Busbey via Colin P. McCabe)

[kihwal] HDFS-8404. Pending block replication can get stuck using older genstamp. Contributed by Nathan Roberts.

[junping_du] Moving MAPREDUCE-6361 to 2.7.1 CHANGES.txt

[Arun Suresh] HADOOP-11973. Ensure ZkDelegationTokenSecretManager namespace znodes get created with ACLs. (Gregory Chanan via asuresh)

[cnauroth] HADOOP-11963. Metrics documentation for FSNamesystem misspells PendingDataNodeMessageCount. Contributed by Anu Engineer.

[jianhe] YARN-2821. Fixed a problem that DistributedShell AM may hang if restarted. Contributed by Varun Vasudev

[aw] HADOOP-12000. cannot use --java-home in test-patch (aw)

[wangda] YARN-3565. NodeHeartbeatRequest/RegisterNodeManagerRequest should use NodeLabel object instead of String. (Naganarasimha G R via wangda)

[wangda] YARN-3583. Support of NodeLabel object instead of plain String in YarnClient side. (Sunil G via wangda)

[ozawa] YARN-3677. Fix findbugs warnings in yarn-server-resourcemanager. Contributed by Vinod Kumar Vavilapalli.

[wheat9] HADOOP-11995. Make jetty version configurable from the maven command line. Contributed by Sriharsha Devineni.

[aajisaka] HADOOP-11698. Remove DistCpV1 and Logalyzer. Contributed by Brahma Reddy Battula.

------------------------------------------
[...truncated 8401 lines...]
     [exec] 2015-05-20 14:54:01,909 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-20 14:54:01,911 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-20 14:54:01,921 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:44516 starting to offer service
     [exec] 2015-05-20 14:54:01,929 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-20 14:54:01,929 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 56237: starting
     [exec] 2015-05-20 14:54:02,183 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 20216@asf905.gq1.ygridcore.net
     [exec] 2015-05-20 14:54:02,184 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,184 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-20 14:54:02,204 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,204 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1718295274-67.195.81.149-1432133639955>
     [exec] 2015-05-20 14:54:02,204 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1718295274-67.195.81.149-1432133639955> is not formatted for BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,204 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-20 14:54:02,204 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1718295274-67.195.81.149-1432133639955 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1718295274-67.195.81.149-1432133639955/current>
     [exec] 2015-05-20 14:54:02,206 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 20216@asf905.gq1.ygridcore.net
     [exec] 2015-05-20 14:54:02,207 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,207 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-20 14:54:02,223 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,223 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1718295274-67.195.81.149-1432133639955>
     [exec] 2015-05-20 14:54:02,224 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1718295274-67.195.81.149-1432133639955> is not formatted for BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,224 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-20 14:54:02,224 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1718295274-67.195.81.149-1432133639955 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1718295274-67.195.81.149-1432133639955/current>
     [exec] 2015-05-20 14:54:02,225 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=8655174;bpid=BP-1718295274-67.195.81.149-1432133639955;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=8655174;c=0;bpid=BP-1718295274-67.195.81.149-1432133639955;dnuuid=null
     [exec] 2015-05-20 14:54:02,227 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 3946e6b1-482b-4bae-9c70-4df44fa6e173
     [exec] 2015-05-20 14:54:02,248 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-a7872c5a-683a-4769-8bff-4c1682e8ec25
     [exec] 2015-05-20 14:54:02,248 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-20 14:54:02,248 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-bb4ebb37-a7af-44a5-aa75-d9e9c3943f23
     [exec] 2015-05-20 14:54:02,248 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-20 14:54:02,251 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-20 14:54:02,258 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432150855258 with interval 21600000
     [exec] 2015-05-20 14:54:02,258 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,259 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1718295274-67.195.81.149-1432133639955 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-20 14:54:02,259 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1718295274-67.195.81.149-1432133639955 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-20 14:54:02,270 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1718295274-67.195.81.149-1432133639955 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 11ms
     [exec] 2015-05-20 14:54:02,270 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1718295274-67.195.81.149-1432133639955 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 11ms
     [exec] 2015-05-20 14:54:02,270 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-1718295274-67.195.81.149-1432133639955: 11ms
     [exec] 2015-05-20 14:54:02,271 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1718295274-67.195.81.149-1432133639955 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-20 14:54:02,271 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1718295274-67.195.81.149-1432133639955 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-20 14:54:02,271 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1718295274-67.195.81.149-1432133639955/current/replicas> doesn't exist 
     [exec] 2015-05-20 14:54:02,271 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1718295274-67.195.81.149-1432133639955/current/replicas> doesn't exist 
     [exec] 2015-05-20 14:54:02,271 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1718295274-67.195.81.149-1432133639955 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-20 14:54:02,271 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1718295274-67.195.81.149-1432133639955 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-20 14:54:02,272 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-20 14:54:02,273 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-1718295274-67.195.81.149-1432133639955 (Datanode Uuid 3946e6b1-482b-4bae-9c70-4df44fa6e173) service to localhost/127.0.0.1:44516 beginning handshake with NN
     [exec] 2015-05-20 14:54:02,282 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:48741, datanodeUuid=3946e6b1-482b-4bae-9c70-4df44fa6e173, infoPort=49566, infoSecurePort=0, ipcPort=56237, storageInfo=lv=-56;cid=testClusterID;nsid=8655174;c=0) storage 3946e6b1-482b-4bae-9c70-4df44fa6e173
     [exec] 2015-05-20 14:54:02,283 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-20 14:54:02,284 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:48741
     [exec] 2015-05-20 14:54:02,290 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool BP-1718295274-67.195.81.149-1432133639955 (Datanode Uuid 3946e6b1-482b-4bae-9c70-4df44fa6e173) service to localhost/127.0.0.1:44516 successfully registered with NN
     [exec] 2015-05-20 14:54:02,291 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:44516 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-20 14:54:02,306 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-20 14:54:02,306 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-a7872c5a-683a-4769-8bff-4c1682e8ec25 for DN 127.0.0.1:48741
     [exec] 2015-05-20 14:54:02,308 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-bb4ebb37-a7af-44a5-aa75-d9e9c3943f23 for DN 127.0.0.1:48741
     [exec] 2015-05-20 14:54:02,319 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-20 14:54:02,321 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-1718295274-67.195.81.149-1432133639955 (Datanode Uuid 3946e6b1-482b-4bae-9c70-4df44fa6e173) service to localhost/127.0.0.1:44516 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-20 14:54:02,322 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-1718295274-67.195.81.149-1432133639955 (Datanode Uuid 3946e6b1-482b-4bae-9c70-4df44fa6e173) service to localhost/127.0.0.1:44516
     [exec] 2015-05-20 14:54:02,325 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-20 14:54:02,325 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-20 14:54:02,325 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-20 14:54:02,325 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-20 14:54:02,327 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-20 14:54:02,336 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-bb4ebb37-a7af-44a5-aa75-d9e9c3943f23 from datanode 3946e6b1-482b-4bae-9c70-4df44fa6e173
     [exec] 2015-05-20 14:54:02,336 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-bb4ebb37-a7af-44a5-aa75-d9e9c3943f23 node DatanodeRegistration(127.0.0.1:48741, datanodeUuid=3946e6b1-482b-4bae-9c70-4df44fa6e173, infoPort=49566, infoSecurePort=0, ipcPort=56237, storageInfo=lv=-56;cid=testClusterID;nsid=8655174;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-20 14:54:02,336 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-a7872c5a-683a-4769-8bff-4c1682e8ec25 from datanode 3946e6b1-482b-4bae-9c70-4df44fa6e173
     [exec] 2015-05-20 14:54:02,337 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-a7872c5a-683a-4769-8bff-4c1682e8ec25 node DatanodeRegistration(127.0.0.1:48741, datanodeUuid=3946e6b1-482b-4bae-9c70-4df44fa6e173, infoPort=49566, infoSecurePort=0, ipcPort=56237, storageInfo=lv=-56;cid=testClusterID;nsid=8655174;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
     [exec] 2015-05-20 14:54:02,352 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xd47f015b080316a,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 4 msec to generate and 26 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-20 14:54:02,352 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,438 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 56237
     [exec] 2015-05-20 14:54:02,438 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 56237
     [exec] 2015-05-20 14:54:02,438 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-20 14:54:02,438 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-1718295274-67.195.81.149-1432133639955 (Datanode Uuid 3946e6b1-482b-4bae-9c70-4df44fa6e173) service to localhost/127.0.0.1:44516 interrupted
     [exec] 2015-05-20 14:54:02,439 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-1718295274-67.195.81.149-1432133639955 (Datanode Uuid 3946e6b1-482b-4bae-9c70-4df44fa6e173) service to localhost/127.0.0.1:44516
     [exec] 2015-05-20 14:54:02,540 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1718295274-67.195.81.149-1432133639955 (Datanode Uuid 3946e6b1-482b-4bae-9c70-4df44fa6e173)
     [exec] 2015-05-20 14:54:02,540 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-1718295274-67.195.81.149-1432133639955
     [exec] 2015-05-20 14:54:02,541 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-20 14:54:02,542 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-20 14:54:02,542 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-20 14:54:02,542 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-20 14:54:02,547 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-20 14:54:02,547 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-20 14:54:02,548 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-20 14:54:02,548 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-20 14:54:02,549 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 3 1 
     [exec] 2015-05-20 14:54:02,549 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-20 14:54:02,550 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-20 14:54:02,552 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-20 14:54:02,553 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 44516
     [exec] 2015-05-20 14:54:02,554 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 44516
     [exec] 2015-05-20 14:54:02,555 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-20 14:54:02,555 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-20 14:54:02,583 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-20 14:54:02,584 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for standby state
     [exec] 2015-05-20 14:54:02,585 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-20 14:54:02,686 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-20 14:54:02,687 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-20 14:54:02,687 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
     [java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
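
(A hedged aside for anyone chasing the single warning reported above: with findbugs-maven-plugin 3.0.0 the analysis is normally written to the module's target/findbugsXml.xml, and the plugin ships a "gui" goal for browsing it. Both the report location and the goal name are plugin defaults assumed here, not taken from this log.)

     # Sketch only -- assumes findbugs-maven-plugin defaults and a local trunk checkout.
     cd hadoop-hdfs-project/hadoop-hdfs
     # The plugin records its analysis in target/findbugsXml.xml by default.
     mvn findbugs:gui    # open the recorded warning in the FindBugs GUI
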
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 48.488 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.046 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-20T14:56:08+00:00
[INFO] Final Memory: 54M/177M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:127 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
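
(The failure above is mechanical: the antrun "site" execution copies src/main/docs into target/docs-src, and that source directory is absent from the hadoop-hdfs module. A minimal local sketch of the resume path follows, assuming a plain trunk checkout as the working directory; creating the empty directory is only a diagnostic workaround, not the proper fix, and "install" merely stands in for the elided <goals>.)

     # Hedged workaround sketch -- lets the Ant <copy> task find its source directory.
     mkdir -p hadoop-hdfs-project/hadoop-hdfs/src/main/docs
     # Resume the reactor from the failed module, per the log's own hint;
     # "install" is a stand-in for the elided <goals>.
     mvn install -rf :hadoop-hdfs
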
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797913 bytes
Compression is 0.0%
Took 33 sec
Recording test results
Updating HADOOP-11698
Updating HADOOP-11973
Updating YARN-3583
Updating YARN-3601
Updating HADOOP-11995
Updating HADOOP-12000
Updating HADOOP-11970
Updating YARN-3565
Updating YARN-2821
Updating MAPREDUCE-6361
Updating HDFS-8404
Updating YARN-3302
Updating HDFS-8131
Updating YARN-3677
Updating HADOOP-11963

Hadoop-Hdfs-trunk-Java8 - Build # 191 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/191/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8594 lines...]
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 48.488 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.046 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-20T14:56:08+00:00
[INFO] Final Memory: 54M/177M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:127 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797913 bytes
Compression is 0.0%
Took 33 sec
Recording test results
Updating HADOOP-11698
Updating HADOOP-11973
Updating YARN-3583
Updating YARN-3601
Updating HADOOP-11995
Updating HADOOP-12000
Updating HADOOP-11970
Updating YARN-3565
Updating YARN-2821
Updating MAPREDUCE-6361
Updating HDFS-8404
Updating YARN-3302
Updating HDFS-8131
Updating YARN-3677
Updating HADOOP-11963
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed
(Consistent with the BUILD FAILURE above: the reactor aborted in the maven-antrun-plugin "site" execution on the missing src/main/docs directory, after the test run itself had completed without failures.)