Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2015/09/11 20:03:52 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #358

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/358/changes>

Changes:

[stevel] HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via stevel)
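
HADOOP-12324 is about surfacing the underlying failure from SaslPlainServer rather than losing it during authentication. A minimal sketch of that reporting pattern (not the patch itself; authenticate() is a hypothetical stand-in for the real credential check):

    import javax.security.sasl.SaslException;

    class SaslPlainSketch {
      // Hypothetical stand-in for the real credential check.
      void authenticate(String user, char[] password) throws Exception {
        if (password.length == 0) throw new Exception("empty password");
      }

      void evaluate(String user, char[] password) throws SaslException {
        try {
          authenticate(user, password);
        } catch (Exception e) {
          // Keep the root cause so logs like this one show the real failure.
          throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
        }
      }
    }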

------------------------------------------
[...truncated 7189 lines...]
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.357 sec - in org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.057 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.321 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.354 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLog
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.124 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.47 sec - in org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.158 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsck
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.958 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.189 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.023 sec - in org.apache.hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.959 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.824 sec - in org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.114 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.028 sec - in org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.965 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.416 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestClusterId
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.667 sec - in org.apache.hadoop.hdfs.server.namenode.TestClusterId
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.026 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestINodeFile
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.35 sec - in org.apache.hadoop.hdfs.server.namenode.TestINodeFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImage
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.275 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.611 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec - in org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.41 sec - in org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.346 sec - in org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgressMetrics
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.444 sec - in org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgressMetrics
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeInitStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.657 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeInitStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.695 sec - in org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.954 sec - in org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.526 sec - in org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.157 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.747 sec - in org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.extdataset.TestExternalDataset
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.133 sec - in org.apache.hadoop.hdfs.server.datanode.extdataset.TestExternalDataset
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestTriggerBlockReport
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.505 sec - in org.apache.hadoop.hdfs.server.datanode.TestTriggerBlockReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestHSync
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.283 sec - in org.apache.hadoop.hdfs.server.datanode.TestHSync
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestParameterParser
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.568 sec - in org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestParameterParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.475 sec - in org.apache.hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestHdfsServerConstants
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in org.apache.hadoop.hdfs.server.datanode.TestHdfsServerConstants
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.037 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.226 sec - in org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestAvailableSpaceVolumeChoosingPolicy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.972 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.TestAvailableSpaceVolumeChoosingPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.332 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.266 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.73 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 73.06 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.036 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.166 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.107 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.94 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.836 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.407 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.931 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.79 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter

Results :

Tests run: 2680, Failures: 0, Errors: 0, Skipped: 15

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:16 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:55 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.052 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:58 h
[INFO] Finished at: 2015-09-11T18:03:34+00:00
[INFO] Final Memory: 75M/792M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter6332055089666442513.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5097127523017125452tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1707383158986727830952tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5369256 bytes
Compression is 0.0%
Took 7.7 sec
Recording test results
Updating HADOOP-12324

Hadoop-Hdfs-trunk-Java8 - Build # 359 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/359/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8010 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:16 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.065 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:51 h
[INFO] Finished at: 2015-09-11T21:33:07+00:00
[INFO] Final Memory: 55M/543M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5378037 bytes
Compression is 0.0%
Took 9.2 sec
Recording test results
Updating HDFS-9027
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
org/apache/hadoop/conf/Configuration$Resource

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration$Resource
	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:1043)
	at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1093)
	at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1313)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.<clinit>(TestLeaseRecovery2.java:77)
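
This suite-level failure happens before any test method runs: TestLeaseRecovery2 reads configuration in a static initializer (line 77 in the trace), so once the forked VM can no longer load Configuration$Resource, the whole class dies in <clinit> with NoClassDefFoundError. A sketch of the pattern (the key name is an assumption):

    import org.apache.hadoop.conf.Configuration;

    class ClinitSketch {
      // Runs during class initialization; any classloading failure inside
      // Configuration takes down the entire suite, as in the trace above.
      static final Configuration CONF = new Configuration();
      static final int BLOCK_SIZE =
          CONF.getInt("io.bytes.per.checksum", 512);  // assumed key name
    }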


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Could not initialize class org.apache.hadoop.hdfs.TestLeaseRecovery2

Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.TestLeaseRecovery2
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 50
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
] expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Hadoop-Hdfs-trunk-Java8 - Build # 361 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/361/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8163 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:32 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.061 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:54 h
[INFO] Finished at: 2015-09-12T10:43:40+00:00
[INFO] Final Memory: 55M/564M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5371555 bytes
Compression is 0.0%
Took 8.8 sec
Recording test results
Updating HDFS-9042
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
13 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack

Error Message:

BlockCollection$$EnhancerByMockitoWithCGLIB$$62cb02d5 cannot be returned by isRunning()
isRunning() should return boolean

Stack Trace:
org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
BlockCollection$$EnhancerByMockitoWithCGLIB$$62cb02d5 cannot be returned by isRunning()
isRunning() should return boolean
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:443)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSingleRackClusterIsSufficientlyReplicated(TestBlockManager.java:376)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack(TestBlockManager.java:368)
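
Mockito raises WrongTypeOfReturnValue when a queued thenReturn() value gets paired with the wrong invocation, which can happen if another thread calls a shared mock while a stubbing is still open; here the cluster code evidently called isRunning() on the mocked namesystem mid-stubbing. A self-contained sketch of that race (it fails nondeterministically, which is exactly what makes such tests flaky; the List mock is illustrative):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import java.util.List;

    public class StubbingRaceSketch {
      public static void main(String[] args) throws Exception {
        List<?> shared = mock(List.class);
        Thread poller = new Thread(() -> {
          for (int i = 0; i < 100_000; i++) {
            shared.isEmpty();               // concurrent, unstubbed call
          }
        });
        poller.start();
        for (int i = 0; i < 100_000; i++) {
          // If isEmpty() lands between when(...) and thenReturn(...),
          // Mockito may pair the Integer with isEmpty() and throw
          // "Integer cannot be returned by isEmpty()".
          when(shared.size()).thenReturn(i);
        }
        poller.join();
      }
    }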


FAILED:  org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1255 ms

Stack Trace:
java.lang.AssertionError: write timedout too late in 1255 ms
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1043)
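
testDFSClientPeerWriteTimeout asserts not just that the write fails, but that it fails promptly; "too late in 1255 ms" means the timeout fired, only past the allowed bound. The shape of such an assertion, with assumed constants and a hypothetical stand-in for the blocking write:

    import static org.junit.Assert.assertTrue;
    import static org.junit.Assert.fail;
    import java.io.IOException;

    class TimeoutAssertSketch {
      static final long TIMEOUT_MS = 1000, SLACK_MS = 200;  // assumed values

      // Hypothetical stand-in for a DFS write that should hit the timeout.
      void writeUntilBlocked() throws IOException {
        throw new IOException("write timed out");
      }

      void checkPromptTimeout() {
        long start = System.nanoTime();
        try {
          writeUntilBlocked();
          fail("write did not time out");
        } catch (IOException expected) {
          long elapsed = (System.nanoTime() - start) / 1_000_000;
          // 1255 ms fails a bound like this one: timed out, but too late.
          assertTrue("write timedout too late in " + elapsed + " ms",
                     elapsed <= TIMEOUT_MS + SLACK_MS);
        }
      }
    }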


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1809)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
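
The doubly wrapped ExitException comes from Hadoop's ExitUtil, which MiniDFSCluster tests use so that a NameNode calling terminate() throws instead of killing the forked JVM. A sketch of the mechanism (ExitUtil and ExitException are real hadoop-common classes; this standalone usage is illustrative):

    import org.apache.hadoop.util.ExitUtil;

    public class ExitSketch {
      public static void main(String[] args) {
        ExitUtil.disableSystemExit();  // terminate() now throws instead
        try {
          ExitUtil.terminate(1, "Could not sync enough journals");
        } catch (ExitUtil.ExitException e) {
          // This is how the journal-sync failure reaches the JUnit report
          // rather than silently ending the VM.
          System.out.println("caught: " + e.getMessage());
        }
      }
    }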


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)
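
"Lease monitor is not running" is a Guava precondition: triggerMonitorCheckNow() refuses to poke a monitor thread that has already stopped, the usual aftermath of the NameNode leaving active state earlier in the run. The guard looks like this (the field name is an assumption):

    import com.google.common.base.Preconditions;

    class LeaseMonitorSketch {
      private Thread lmthread;  // assumed field name for the monitor thread

      void triggerMonitorCheckNow() {
        // Throws IllegalStateException with exactly this message when the
        // monitor thread is gone.
        Preconditions.checkState(lmthread != null, "Lease monitor is not running");
        lmthread.interrupt();  // wake the monitor to re-check leases now
      }
    }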


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
End of File Exception between local host is: "asf905.gq1.ygridcore.net/67.195.81.149"; destination host is: "localhost":56028; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf905.gq1.ygridcore.net/67.195.81.149"; destination host is: "localhost":56028; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1449)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy25.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1103)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:998)
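
The EOFException itself is mechanical: Client.receiveRpcResponse begins with DataInputStream.readInt(), which needs four bytes, and a connection torn down by a dying NameNode delivers fewer. A self-contained illustration:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;

    public class EofSketch {
      public static void main(String[] args) throws Exception {
        // Two bytes stand in for a response stream cut off mid-frame.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(new byte[2]));
        try {
          in.readInt();  // needs 4 bytes, only 2 available
        } catch (EOFException e) {
          System.out.println("EOF, as in Client.receiveRpcResponse");
        }
      }
    }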


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)
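
The lazy-persist failures in this run all trip the same check: verifyRamDiskJMXMetric reads a DataNode counter over JMX and expects an eviction to have been recorded, so a slow or missing eviction reports as "expected:<1> but was:<0>". A sketch of reading such a metric through the platform MBeanServer (the ObjectName and attribute below are assumptions, not the test's exact names):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxMetricSketch {
      public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Assumed names; resolves only if a DataNode is registered in
        // this JVM, as it is inside a MiniDFSCluster test.
        ObjectName dn = new ObjectName("Hadoop:service=DataNode,name=DataNodeInfo");
        Object evicted = mbs.getAttribute(dn, "RamDiskBlocksEvicted");
        System.out.println("RamDiskBlocksEvicted = " + evicted);
      }
    }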


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-1
  target length: 50
  current item: 45
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 45
	 done: false
] expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Hadoop-Hdfs-trunk-Java8 - Build # 365 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/365/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8105 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:27 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:54 h
[INFO] Finished at: 2015-09-13T16:43:29+00:00
[INFO] Final Memory: 55M/491M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5285249855051866221.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire4220397878488451621tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4114397840772839427134tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5372697 bytes
Compression is 0.0%
Took 6.1 sec
Recording test results
Updating HADOOP-12087
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
9 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount

Error Message:
Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 after 20000 msec.  Last counts: live = 2, excess = 0, corrupt = 0

Stack Trace:
java.util.concurrent.TimeoutException: Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 after 20000 msec.  Last counts: live = 2, excess = 0, corrupt = 0
	at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:156)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:130)
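
For context on the failure above: timeouts of this shape typically come from a poll-until-deadline helper that re-checks cluster state until it matches or the clock runs out. A minimal sketch of that pattern, with hypothetical names rather than the actual TestNodeCount code:

    import java.util.concurrent.TimeoutException;

    public class PollUntil {
      interface Check { boolean ok() throws Exception; }  // hypothetical

      static void await(Check check, long timeoutMs, String what)
          throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.ok()) {
          if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException(
                "Timeout: " + what + " after " + timeoutMs + " msec.");
          }
          Thread.sleep(100);  // brief back-off before re-checking
        }
      }
    }

A failure here therefore means the condition (two excess replicas for the block) never became true within 20 seconds, not that a single-snapshot assertion failed.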


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger.testDisableMetricsLogger

Error Message:
Port in use: 0.0.0.0:46030

Stack Trace:
java.net.BindException: Port in use: 0.0.0.0:46030
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:414)
	at sun.nio.ch.Net.bind(Net.java:406)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:839)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:663)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:896)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:880)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger$TestNameNode.<init>(TestNameNodeMetricsLogger.java:142)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger.makeNameNode(TestNameNodeMetricsLogger.java:118)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger.testDisableMetricsLogger(TestNameNodeMetricsLogger.java:66)
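
A note on the BindException above: it usually indicates two processes racing for the same port on a shared build host. The common remedy in tests is to bind port 0 and let the OS pick a free ephemeral port, as in this hedged sketch (illustrative, not the test's actual code):

    import java.io.IOException;
    import java.net.ServerSocket;

    public class FreePort {
      // Returns an OS-assigned port that was free at probe time; a small
      // race window remains before the server under test rebinds it.
      public static int probe() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
          return s.getLocalPort();
        }
      }
    }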


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)
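
The "expected:<1> but was:<0>" assertions in this group come from reading a RAM-disk metric over JMX and finding it still at zero. A self-contained sketch of reading a numeric JMX attribute (the bean name below is a placeholder, not the real DataNode bean):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxProbe {
      public static long readLong(String beanName, String attr)
          throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // e.g. beanName = "Hadoop:service=DataNode,name=SomeBean" (placeholder)
        return ((Number) mbs.getAttribute(new ObjectName(beanName), attr))
            .longValue();
      }
    }

Since metric publication is asynchronous, a zero reading often means the probe ran before the eviction was reflected, which is why these tests tend to be timing-sensitive on loaded build machines.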


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 24
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 43
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 42
  done: false
] expected:<0> but was:<3>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 24
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 43
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 42
	 done: false
] expected:<0> but was:<3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Hadoop-Hdfs-trunk-Java8 - Build # 370 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/370/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8248 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:13 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:52 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.047 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:55 h
[INFO] Finished at: 2015-09-15T03:27:48+00:00
[INFO] Final Memory: 55M/654M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5376562 bytes
Compression is 0.0%
Took 6.8 sec
Recording test results
Updating HDFS-8829
Updating YARN-4151
Updating HDFS-8996
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
24 tests failed.
REGRESSION:  org.apache.hadoop.cli.TestAclCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestAclCLI.execute(TestAclCLI.java:76)
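
The NoSuchMethodError above (here and in the sibling CLI tests below) is a binary-compatibility symptom: the test was compiled against a CLICommand declaring getExecutor(String, Configuration), but the class loaded at runtime lacks that signature, which points at a stale jar or class directory on the test classpath. A hedged diagnostic sketch for confirming what the JVM actually loaded:

    import java.lang.reflect.Method;

    public class MethodCheck {
      public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("org.apache.hadoop.cli.util.CLICommand");
        for (Method m : c.getMethods()) {
          if (m.getName().equals("getExecutor")) {
            System.out.println(m);  // signature(s) present on the classpath
          }
        }
        // Shows which jar or directory the class came from, to spot staleness:
        System.out.println(c.getProtectionDomain().getCodeSource());
      }
    }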


REGRESSION:  org.apache.hadoop.cli.TestAclCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestAclCLI.tearDown(TestAclCLI.java:49)


REGRESSION:  org.apache.hadoop.cli.TestCacheAdminCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestCacheAdminCLI.execute(TestCacheAdminCLI.java:134)


REGRESSION:  org.apache.hadoop.cli.TestCacheAdminCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestCacheAdminCLI.tearDown(TestCacheAdminCLI.java:84)


REGRESSION:  org.apache.hadoop.cli.TestCryptoAdminCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestCryptoAdminCLI.execute(TestCryptoAdminCLI.java:163)


REGRESSION:  org.apache.hadoop.cli.TestCryptoAdminCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestCryptoAdminCLI.tearDown(TestCryptoAdminCLI.java:94)


REGRESSION:  org.apache.hadoop.cli.TestDeleteCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestDeleteCLI.execute(TestDeleteCLI.java:84)


REGRESSION:  org.apache.hadoop.cli.TestDeleteCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestDeleteCLI.tearDown(TestDeleteCLI.java:66)


REGRESSION:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestHDFSCLI.execute(TestHDFSCLI.java:98)


REGRESSION:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:85)


REGRESSION:  org.apache.hadoop.cli.TestXAttrCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestXAttrCLI.execute(TestXAttrCLI.java:90)


REGRESSION:  org.apache.hadoop.cli.TestXAttrCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestXAttrCLI.tearDown(TestXAttrCLI.java:75)


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1809)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
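
Background on the ExitException pair above: Hadoop routes fatal NameNode errors through org.apache.hadoop.util.ExitUtil rather than calling System.exit directly, so tests can trap a would-be JVM exit as an exception. A hedged sketch based on the public ExitUtil API (not on TestLeaseRecovery2 itself):

    import org.apache.hadoop.util.ExitUtil;

    public class ExitTrap {
      public static void main(String[] args) {
        ExitUtil.disableSystemExit();  // make terminate() throw, not exit
        try {
          ExitUtil.terminate(1, "simulated fatal error");
        } catch (ExitUtil.ExitException e) {
          System.out.println("trapped exit: " + e.getMessage());
        }
        System.out.println("terminate called? " + ExitUtil.terminateCalled());
      }
    }

This is also why the "Test resulted in an unexpected exit" failure appears further below: cluster teardown asserts that no terminate() call was recorded during the test.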


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)
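
The IllegalStateException above is a Guava precondition firing: checkState throws with the supplied message whenever its condition is false. A minimal illustration of the idiom (field and method names are hypothetical):

    import com.google.common.base.Preconditions;

    public class LeaseCheckSketch {
      private volatile boolean monitorRunning;  // hypothetical state flag

      void triggerCheck() {
        Preconditions.checkState(monitorRunning,
            "Lease monitor is not running");
        // ... proceed only while the monitor thread is alive ...
      }
    }

So the failure indicates the test reached this call after the monitor had already stopped, consistent with the unexpected NameNode exit in the preceding test.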


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
End of File Exception between local host is: "asf906.gq1.ygridcore.net/67.195.81.150"; destination host is: "localhost":35304; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf906.gq1.ygridcore.net/67.195.81.150"; destination host is: "localhost":35304; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1444)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy22.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1098)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:993)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)


REGRESSION:  org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure$SlowWriter.checkReplication(TestReplaceDatanodeOnFailure.java:235)
	at org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure(TestReplaceDatanodeOnFailure.java:154)
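
For the replication mismatch above ("expected:<3> but was:<2>"): the check counts how many datanodes actually hold the block, not the file's requested replication factor. A hedged sketch of counting reported replicas through the public FileSystem API (illustrative, not the test's helper):

    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicaCount {
      public static int firstBlockReplicas(FileSystem fs, Path p)
          throws Exception {
        FileStatus st = fs.getFileStatus(p);
        BlockLocation[] locs = fs.getFileBlockLocations(st, 0, 1);
        return locs.length == 0 ? 0 : locs[0].getHosts().length;
      }
    }

A count of 2 instead of 3 suggests the replacement datanode in the write pipeline had not yet been reported when the assertion ran.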


REGRESSION:  org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack

Error Message:

BlockCollection$$EnhancerByMockitoWithCGLIB$$f87a9b9d cannot be returned by isRunning()
isRunning() should return boolean

Stack Trace:
org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
BlockCollection$$EnhancerByMockitoWithCGLIB$$f87a9b9d cannot be returned by isRunning()
isRunning() should return boolean
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:443)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSingleRackClusterIsSufficientlyReplicated(TestBlockManager.java:376)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack(TestBlockManager.java:368)
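
The WrongTypeOfReturnValue above is a Mockito misuse signal: a stubbing was started (when(mock.isRunning())) but a different mock interaction was recorded before thenReturn completed it, so the wrong value got attached. Completed stubbing looks like this hedged sketch (the interface and names are illustrative):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    public class StubOrder {
      interface Service {            // hypothetical stand-in for the mocks
        boolean isRunning();
        Object lookupCollection();
      }

      public static void main(String[] args) {
        Service s = mock(Service.class);
        // Finish each stubbing before touching the mock again; interleaving
        // the two when(...) calls is what produces WrongTypeOfReturnValue.
        when(s.isRunning()).thenReturn(true);
        when(s.lookupCollection()).thenReturn(new Object());
      }
    }

The same error can also surface when mocks are stubbed concurrently from multiple threads, a plausible cause on a heavily loaded build executor.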


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck.testHaFsck

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1871)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:850)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:478)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:437)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck.testHaFsck(TestHAFsck.java:61)


REGRESSION:  org.apache.hadoop.hdfs.web.TestWebHDFSOAuth2.listStatusReturnsAsExpected

Error Message:
Unable to load OAuth2 connection factory.

Stack Trace:
java.io.IOException: Unable to load OAuth2 connection factory.
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:131)
	at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:164)
	at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:81)
	at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:215)
	at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
	at org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:135)
	at org.apache.hadoop.hdfs.web.URLConnectionFactory.newOAuth2URLConnectionFactory(URLConnectionFactory.java:111)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:159)
	at org.apache.hadoop.hdfs.web.TestWebHDFSOAuth2.listStatusReturnsAsExpected(TestWebHDFSOAuth2.java:147)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)



Hadoop-Hdfs-trunk-Java8 - Build # 371 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/371/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7480 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:08 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:04 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.084 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:08 h
[INFO] Finished at: 2015-09-15T07:28:57+00:00
[INFO] Final Memory: 81M/881M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5494757301808925737.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire3039751869138466959tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1495047581774492725815tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5378738 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-9065
Updating HDFS-9010
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
9 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-2
  target length: 50
  current item: 47
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 47
	 done: false
] expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1812)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:957)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1812)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:957)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1781)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1816)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:957)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
End of File Exception between local host is: "asf907.gq1.ygridcore.net/67.195.81.151"; destination host is: "localhost":38420; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf907.gq1.ygridcore.net/67.195.81.151"; destination host is: "localhost":38420; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1449)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy25.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1103)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:998)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #371

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/371/changes>

Changes:

[wheat9] HDFS-9010. Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by Mingliang Liu.

[wheat9] HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN Web UI. Contributed by Daniel Templeton.

------------------------------------------
[...truncated 7287 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.141 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider
Killed
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.316 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.09 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.307 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 172.772 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.977 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.528 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.18 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.192 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.305 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.467 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.787 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.26 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestXAttrFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.22 sec - in org.apache.hadoop.hdfs.server.namenode.TestXAttrFeature
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.553 sec - in org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.647 sec - in org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.594 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
Killed
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.233 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.781 sec - in org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.646 sec - in org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.457 sec - in org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.54 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeleteRace
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.341 sec - in org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgressMetrics
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.442 sec - in org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgressMetrics
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.055 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.846 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAclTransformation
Tests run: 55, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.322 sec - in org.apache.hadoop.hdfs.server.namenode.TestAclTransformation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStartupOptionUpgrade
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.784 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupOptionUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.782 sec - in org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.738 sec - in org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.236 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeHttpServer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.01 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeHttpServer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.35 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditsDoubleBuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditsDoubleBuffer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImage
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.308 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.777 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNode
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.765 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecureNameNode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.48 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFSOutputSummer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.223 sec - in org.apache.hadoop.hdfs.TestFSOutputSummer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.102 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.733 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.336 sec - in org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.97 sec - in org.apache.hadoop.hdfs.TestDatanodeRegistration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPread

Results :

Failed tests: 
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 47
	 done: false
] expected:<0> but was:<1>

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » EOF End of File Exception betwe...

Tests run: 1807, Failures: 6, Errors: 3, Skipped: 3
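
The three TestLeaseRecovery2 errors above are all variations on hard lease recovery: a writer dies while holding a lease, and another client (or the NameNode's lease monitor) has to force the file closed. As a rough sketch of the client-side operation under test: recoverLease() is the public DistributedFileSystem API, while the retry loop and timing here are illustrative, since recovery completes asynchronously:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class LeaseRecoverySketch {
      // Ask the NameNode to recover the lease on an open file; poll until
      // recovery completes and the file is closed, or give up after ~10s.
      static boolean recoverLeaseWithRetries(DistributedFileSystem dfs, Path file)
          throws Exception {
        for (int i = 0; i < 10; i++) {
          if (dfs.recoverLease(file)) {
            return true;        // lease released, last block finalized
          }
          Thread.sleep(1000);   // recovery is asynchronous; retry
        }
        return false;
      }
    }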

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:08 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:04 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.084 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:08 h
[INFO] Finished at: 2015-09-15T07:28:57+00:00
[INFO] Final Memory: 81M/881M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5494757301808925737.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire3039751869138466959tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1495047581774492725815tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5378738 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-9065
Updating HDFS-9010

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #370

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/370/changes>

Changes:

[cmccabe] HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang via Colin P. McCabe)

[wangda] YARN-4151. Fix findbugs errors in hadoop-yarn-server-common module. (Meng Ding via wangda)

[cmccabe] HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via Colin P. McCabe)

------------------------------------------
[...truncated 8055 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.029 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.579 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.757 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.66 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.043 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.068 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.857 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.074 sec - in org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.056 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.526 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.452 sec - in org.apache.hadoop.TestRefreshCallQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.38 sec - in org.apache.hadoop.net.TestNetworkTopology
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 6.456 sec <<< FAILURE! - in org.apache.hadoop.cli.TestCacheAdminCLI
testAll(org.apache.hadoop.cli.TestCacheAdminCLI)  Time elapsed: 6.29 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestCacheAdminCLI.execute(TestCacheAdminCLI.java:134)

testAll(org.apache.hadoop.cli.TestCacheAdminCLI)  Time elapsed: 6.291 sec  <<< FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestCacheAdminCLI.tearDown(TestCacheAdminCLI.java:84)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 6.384 sec <<< FAILURE! - in org.apache.hadoop.cli.TestCryptoAdminCLI
testAll(org.apache.hadoop.cli.TestCryptoAdminCLI)  Time elapsed: 6.195 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestCryptoAdminCLI.execute(TestCryptoAdminCLI.java:163)

testAll(org.apache.hadoop.cli.TestCryptoAdminCLI)  Time elapsed: 6.196 sec  <<< FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestCryptoAdminCLI.tearDown(TestCryptoAdminCLI.java:94)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 5.969 sec <<< FAILURE! - in org.apache.hadoop.cli.TestDeleteCLI
testAll(org.apache.hadoop.cli.TestDeleteCLI)  Time elapsed: 5.791 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestDeleteCLI.execute(TestDeleteCLI.java:84)

testAll(org.apache.hadoop.cli.TestDeleteCLI)  Time elapsed: 5.792 sec  <<< FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestDeleteCLI.tearDown(TestDeleteCLI.java:66)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 3.837 sec <<< FAILURE! - in org.apache.hadoop.cli.TestAclCLI
testAll(org.apache.hadoop.cli.TestAclCLI)  Time elapsed: 3.66 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestAclCLI.execute(TestAclCLI.java:76)

testAll(org.apache.hadoop.cli.TestAclCLI)  Time elapsed: 3.661 sec  <<< FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestAclCLI.tearDown(TestAclCLI.java:49)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 6.182 sec <<< FAILURE! - in org.apache.hadoop.cli.TestXAttrCLI
testAll(org.apache.hadoop.cli.TestXAttrCLI)  Time elapsed: 5.986 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestXAttrCLI.execute(TestXAttrCLI.java:90)

testAll(org.apache.hadoop.cli.TestXAttrCLI)  Time elapsed: 5.986 sec  <<< FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestXAttrCLI.tearDown(TestXAttrCLI.java:75)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 9.68 sec <<< FAILURE! - in org.apache.hadoop.cli.TestHDFSCLI
testAll(org.apache.hadoop.cli.TestHDFSCLI)  Time elapsed: 9.498 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
	at org.apache.hadoop.cli.TestHDFSCLI.execute(TestHDFSCLI.java:98)

testAll(org.apache.hadoop.cli.TestHDFSCLI)  Time elapsed: 9.499 sec  <<< FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:85)
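
All six CLI test errors above carry the same JVM descriptor, which decodes to a two-argument method taking (String, Configuration) and returning CommandExecutor. In source form the tests were compiled against an interface shaped like the sketch below (the parameter names are invented). A NoSuchMethodError at run time means the CLICommand class actually loaded lacks that overload, i.e. the compile-time and runtime classpaths disagree, typically because a stale build of the hadoop-common test jar is on the classpath:

    import org.apache.hadoop.cli.util.CommandExecutor;
    import org.apache.hadoop.conf.Configuration;

    // (Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)
    //     Lorg/apache/hadoop/cli/util/CommandExecutor;
    // decodes to: CommandExecutor getExecutor(String, Configuration)
    interface CLICommandAsCompiled {
      CommandExecutor getExecutor(String tag, Configuration conf);
    }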


Results :

Failed tests: 
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure:154 expected:<3> but was:<2>
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit
  TestCacheAdminCLI.tearDown:84->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263 One of the tests failed. See the Detailed results to identify the command that failed
  TestCryptoAdminCLI.tearDown:94->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263 One of the tests failed. See the Detailed results to identify the command that failed
  TestDeleteCLI.tearDown:66->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263 One of the tests failed. See the Detailed results to identify the command that failed
  TestAclCLI.tearDown:49->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263 One of the tests failed. See the Detailed results to identify the command that failed
  TestXAttrCLI.tearDown:75->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263 One of the tests failed. See the Detailed results to identify the command that failed
  TestHDFSCLI.tearDown:85->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263 One of the tests failed. See the Detailed results to identify the command that failed

Tests in error: 
  TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack:368->doTestSingleRackClusterIsSufficientlyReplicated:376->addBlockOnNodes:443 WrongTypeOfReturnValue
  TestHAFsck.testHaFsck:61 » NoClassDefFound org/apache/hadoop/util/ShutdownHook...
  TestWebHDFSOAuth2.listStatusReturnsAsExpected:147 » IO Unable to load OAuth2 c...
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » EOF End of File Exception betwe...
  TestCacheAdminCLI.testAll:140->CLITestHelper.testAll:319->execute:134 NoSuchMethod
  TestCryptoAdminCLI.testAll:169->CLITestHelper.testAll:319->execute:163 NoSuchMethod
  TestDeleteCLI.testAll:90->CLITestHelper.testAll:319->execute:84 NoSuchMethod o...
  TestAclCLI.testAll:82->CLITestHelper.testAll:319->execute:76 NoSuchMethod org....
  TestXAttrCLI.testAll:96->CLITestHelper.testAll:319->execute:90 NoSuchMethod or...
  TestHDFSCLI.testAll:104->CLITestHelper.testAll:319->execute:98 NoSuchMethod or...

Tests run: 3681, Failures: 12, Errors: 12, Skipped: 16
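
Among the failures above, TestReplaceDatanodeOnFailure ("expected:<3> but was:<2>") exercises the client-side pipeline-recovery feature: when a datanode in a write pipeline fails, the client can ask the NameNode for a replacement so the finished block still reaches the target replication (3) instead of one fewer (2). The configuration keys below are the real hdfs-site.xml names; the values are just an illustration of forcing replacement on every failure:

    import org.apache.hadoop.conf.Configuration;

    public class ReplaceDatanodeOnFailureSketch {
      static Configuration alwaysReplaceFailedPipelineNode() {
        Configuration conf = new Configuration();
        // Enable replacement of a failed pipeline datanode...
        conf.setBoolean(
            "dfs.client.block.write.replace-datanode-on-failure.enable", true);
        // ...and request it on every failure rather than only when the
        // pipeline shrinks below the default policy's threshold.
        conf.set(
            "dfs.client.block.write.replace-datanode-on-failure.policy", "ALWAYS");
        return conf;
      }
    }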

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:13 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:52 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.047 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:55 h
[INFO] Finished at: 2015-09-15T03:27:48+00:00
[INFO] Final Memory: 55M/654M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5376562 bytes
Compression is 0.0%
Took 6.8 sec
Recording test results
Updating HDFS-8829
Updating YARN-4151
Updating HDFS-8996

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #369

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/369/changes>

Changes:

[stevel] HDFS-9069. TestNameNodeMetricsLogger failing -port in use. (stevel)

------------------------------------------
[...truncated 7903 lines...]
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.225 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.597 sec - in org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.448 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.196 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.295 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.238 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.748 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.65 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.858 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.53 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.009 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.528 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.544 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 142.266 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.424 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.528 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.826 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.271 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.64 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.395 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.968 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.867 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.568 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.557 sec - in org.apache.hadoop.hdfs.TestReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.587 sec - in org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.349 sec - in org.apache.hadoop.hdfs.TestPipelines
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.517 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.91 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.342 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.475 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.854 sec - in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.604 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.442 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.879 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.297 sec - in org.apache.hadoop.hdfs.TestConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.844 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.836 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.098 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.259 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.367 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.78 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.501 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.645 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Failed tests: 
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>
  TestNameNodeResourceChecker.testCheckThatNameNodeResourceMonitorIsRunning:130 NN should be in safe mode after resources crossed threshold
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>

Tests in error: 
  TestDFSClientRetries.testGetFileChecksum:817 » Runtime java.util.zip.ZipExcept...
  TestDFSClientRetries.testLeaseRenewSocketTimeout:370 » Runtime java.util.zip.Z...

Tests run: 3672, Failures: 7, Errors: 2, Skipped: 16
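
The two TestJMXGet failures at the top of this summary come from the remote-JMX path: the test brings up a mini cluster, connects to the NameNode/DataNode JMX endpoint, and compares an attribute value against what the cluster should have recorded. A stripped-down sketch of that kind of remote read, with a purely illustrative host, port, and ObjectName pattern:

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RemoteJmxReadSketch {
      // Connect to a JMX RMI endpoint and read a single attribute, e.g. a
      // DataNode byte counter. Host/port/pattern/attribute are assumptions.
      static Object readAttribute(String host, int port,
          String namePattern, String attribute) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
          MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
          Set<ObjectName> names = mbsc.queryNames(new ObjectName(namePattern), null);
          ObjectName first = names.iterator().next(); // assume exactly one match
          return mbsc.getAttribute(first, attribute);
        }
      }
    }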

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:11 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:52 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.059 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:55 h
[INFO] Finished at: 2015-09-14T12:47:17+00:00
[INFO] Final Memory: 56M/669M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in starting fork, check output in log -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5373072 bytes
Compression is 0.0%
Took 9.7 sec
Recording test results
Updating HDFS-9069

Hadoop-Hdfs-trunk-Java8 - Build # 369 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/369/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8096 lines...]
main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:11 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:52 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.059 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:55 h
[INFO] Finished at: 2015-09-14T12:47:17+00:00
[INFO] Final Memory: 56M/669M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in starting fork, check output in log -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5373072 bytes
Compression is 0.0%
Took 9.7 sec
Recording test results
Updating HDFS-9069
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
9 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestDFSClientRetries.testGetFileChecksum

Error Message:
java.util.zip.ZipException: invalid code lengths set

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid code lengths set
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:1294)
	at org.apache.hadoop.hdfs.MiniDFSCluster.determineDfsBaseDir(MiniDFSCluster.java:2629)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:785)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:478)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:437)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.testGetFileChecksum(TestDFSClientRetries.java:817)
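
(The trace shows Configuration.loadResource reading a classpath XML resource through an InflaterInputStream, i.e. a compressed jar entry, so "invalid code lengths set" points at corrupt DEFLATE data on the build machine -- for example a damaged jar in the local repository -- rather than at the test logic. A minimal JDK-only sketch of the same failure mode; the exact ZipException message depends on where the stream is corrupted:

    import java.io.ByteArrayInputStream;
    import java.util.zip.InflaterInputStream;
    import java.util.zip.ZipException;

    public class CorruptDeflateDemo {
        public static void main(String[] args) throws Exception {
            // Not a valid zlib/DEFLATE stream: the header check fails.
            byte[] garbage = {0x00, 0x01, 0x02, 0x03};
            try (InflaterInputStream in =
                     new InflaterInputStream(new ByteArrayInputStream(garbage))) {
                in.read(); // throws ZipException, as in the trace above
            } catch (ZipException e) {
                System.out.println("ZipException: " + e.getMessage());
            }
        }
    }
)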


REGRESSION:  org.apache.hadoop.hdfs.TestDFSClientRetries.testLeaseRenewSocketTimeout

Error Message:
java.util.zip.ZipException: invalid code lengths set

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid code lengths set
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1349)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.testLeaseRenewSocketTimeout(TestDFSClientRetries.java:370)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker.testCheckThatNameNodeResourceMonitorIsRunning

Error Message:
NN should be in safe mode after resources crossed threshold

Stack Trace:
java.lang.AssertionError: NN should be in safe mode after resources crossed threshold
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker.testCheckThatNameNodeResourceMonitorIsRunning(TestNameNodeResourceChecker.java:130)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)
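
(Both TestJMXGet cases read DataNode/NameNode metrics over JMX and saw 0, meaning the expected metric value was never published before the assertion ran. Below is a standalone sketch of the MBean lookup pattern such checks rely on; a platform MBean is queried so the snippet runs anywhere, while Hadoop daemons register theirs under names like "Hadoop:service=NameNode,name=..." -- the exact ObjectNames and attributes asserted by TestJMXGet are not reproduced here.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxProbe {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // Swap in a Hadoop ObjectName/attribute when probing a live daemon.
            ObjectName name = new ObjectName("java.lang:type=Runtime");
            Object uptime = mbs.getAttribute(name, "Uptime");
            System.out.println("Uptime = " + uptime + " ms");
        }
    }
)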



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #368

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/368/changes>

Changes:

[jianhe] YARN-4126. RM should not issue delegation tokens in unsecure mode. Contributed by Bibin A Chundatt

------------------------------------------
[...truncated 8014 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.992 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.735 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.504 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.098 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.215 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.111 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.296 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.877 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.274 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.45 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.25 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.267 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.275 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.338 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.423 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.874 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.311 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.045 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.205 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.875 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.539 sec - in org.apache.hadoop.TestRefreshCallQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.756 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.297 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.706 sec - in org.apache.hadoop.security.TestPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.127 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 5.943 sec <<< FAILURE! - in org.apache.hadoop.tools.TestJMXGet
testDataNode(org.apache.hadoop.tools.TestJMXGet)  Time elapsed: 4.299 sec  <<< FAILURE!
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)

testNameNode(org.apache.hadoop.tools.TestJMXGet)  Time elapsed: 1.548 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.522 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.58 sec - in org.apache.hadoop.tracing.TestTracing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.613 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.722 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.378 sec - in org.apache.hadoop.net.TestNetworkTopology
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.188 sec - in org.apache.hadoop.TestGenericRefresh
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.879 sec - in org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.849 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.491 sec - in org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.969 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.703 sec - in org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.056 sec - in org.apache.hadoop.cli.TestDeleteCLI

Results :

Failed tests: 
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit
  TestDirectoryScanner.testDirectoryScanner:402->runTest:505->scan:290->scan:303 expected:<10> but was:<8>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 32
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 50
	 done: false
] expected:<0> but was:<2>
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:426->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:445 » EOF
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » Connect Call From asf904.gq1.yg...

Tests run: 3673, Failures: 9, Errors: 4, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:22 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-14T09:42:24+00:00
[INFO] Final Memory: 55M/574M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5377502 bytes
Compression is 0.0%
Took 8.8 sec
Recording test results
Updating YARN-4126

Hadoop-Hdfs-trunk-Java8 - Build # 368 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/368/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8207 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:22 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-14T09:42:24+00:00
[INFO] Final Memory: 55M/574M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5377502 bytes
Compression is 0.0%
Took 8.8 sec
Recording test results
Updating YARN-4126
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
13 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1809)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
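
(The ExitException above means the NameNode attempted a fatal exit during shutdown because no edit-log journal could be synced. Under MiniDFSCluster, ExitUtil intercepts the would-be System.exit, and the class-level "Test resulted in an unexpected exit" failure further down is tearDown noticing that intercepted exit. A short sketch of that guard pattern, assuming the org.apache.hadoop.util.ExitUtil API of this era:

    import org.apache.hadoop.util.ExitUtil;

    public class ExitGuardSketch {
        public static void main(String[] args) {
            // Turn System.exit() requests into catchable exceptions.
            ExitUtil.disableSystemExit();
            try {
                ExitUtil.terminate(1, "simulated fatal error");
            } catch (ExitUtil.ExitException e) {
                System.out.println("intercepted exit: " + e.getMessage());
            }
            // Test teardown can assert nothing exited unexpectedly.
            System.out.println("terminateCalled = " + ExitUtil.terminateCalled());
        }
    }
)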


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":58336; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":58336; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1449)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy22.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:445)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1103)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:998)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:58336 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:58336 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:633)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:731)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1511)
	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy22.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
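
(The EOFException and ConnectException failures in this class look downstream of the same event: once the in-test NameNode at localhost:58336 went down (see the ExitException above), subsequent IPC calls either lost their connection mid-response or were refused outright. A JDK-only sketch of the refused-connection case, using the port from the log purely as an example:

    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;

    public class ConnectRefusedDemo {
        public static void main(String[] args) {
            try (SocketChannel ch = SocketChannel.open()) {
                // Fails the same way the IPC client did once nothing listens here.
                ch.connect(new InetSocketAddress("localhost", 58336));
            } catch (java.io.IOException e) {
                System.out.println(e); // typically java.net.ConnectException
            }
        }
    }
)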


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testDirectoryScanner

Error Message:
expected:<10> but was:<8>

Stack Trace:
java.lang.AssertionError: expected:<10> but was:<8>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.scan(TestDirectoryScanner.java:303)
	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.scan(TestDirectoryScanner.java:290)
	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.runTest(TestDirectoryScanner.java:505)
	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testDirectoryScanner(TestDirectoryScanner.java:402)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 32
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 50
  done: false
] expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 32
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 50
	 done: false
] expected:<0> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)
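
The writer-state dump above reflects a common test pattern: background writer threads must each reach a target item count before a deadline, and the final assertion compares the number of unfinished writers to zero. A hedged sketch of that pattern (thread count, timeout, and names are illustrative, not the actual TestSeveralNameNodes code):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    import static org.junit.Assert.assertEquals;

    public class DeadlineWriters {
      public static void main(String[] args) throws Exception {
        final int writers = 3;                    // one per /test-N directory
        final CountDownLatch done = new CountDownLatch(writers);
        for (int i = 0; i < writers; i++) {
          new Thread(() -> {
            // ... write items until the target length (e.g. 50) is reached ...
            done.countDown();
          }).start();
        }
        // Give the writers their expected runtime, then require that
        // none of them is still unfinished.
        done.await(30, TimeUnit.SECONDS);
        assertEquals(0, done.getCount());
      }
    }

In that framing, expected:<0> but was:<2> simply says two writers missed the deadline -- consistent with a slow or overloaded build slave rather than lost writes, since both stuck writers report done: false with progress still advancing.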



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #367

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/367/changes>

Changes:

[kasha] YARN-3697. FairScheduler: ContinuousSchedulingThread can fail to shutdown. (Zhihai Xu via kasha)

------------------------------------------
[...truncated 7096 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.056 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestXAttrFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec - in org.apache.hadoop.hdfs.server.namenode.TestXAttrFeature
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.759 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsck
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgressMetrics
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.432 sec - in org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgressMetrics
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.339 sec - in org.apache.hadoop.hdfs.server.namenode.startupprogress.TestStartupProgress
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.838 sec - in org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.604 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.951 sec - in org.apache.hadoop.hdfs.server.datanode.TestDiskError
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.655 sec - in org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.249 sec - in org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestHdfsServerConstants
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.server.datanode.TestHdfsServerConstants
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.902 sec - in org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestHSync
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.736 sec - in org.apache.hadoop.hdfs.server.datanode.TestHSync
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.593 sec - in org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.503 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.373 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestAvailableSpaceVolumeChoosingPolicy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.282 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.TestAvailableSpaceVolumeChoosingPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 147.437 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
testSynchronousEviction(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter)  Time elapsed: 13.084 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.132 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.707 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.746 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
testReleaseOnEviction(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory)  Time elapsed: 11.263 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.412 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.738 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.73 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.602 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.629 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.865 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 36.184 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
testSynchronousEviction(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement)  Time elapsed: 13.086 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)

testFallbackToDiskFull(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement)  Time elapsed: 0.783 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy

Results :

Failed tests: 
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 1
	 done: false
] expected:<0> but was:<3>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>

Tests in error: 
  TestSecurityTokenEditLog.testEditLog:111 » NoClassDefFound org/apache/hadoop/s...
  TestFailureToReadEdits.setUpCluster:112 » Bind Port in use: localhost:10001

Tests run: 2026, Failures: 7, Errors: 2, Skipped: 10

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:16 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:20 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:23 h
[INFO] Finished at: 2015-09-14T05:15:15+00:00
[INFO] Final Memory: 72M/993M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8578996443736824591.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire8432387045496321575tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_887576893387536073199tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5372917 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-3697

Hadoop-Hdfs-trunk-Java8 - Build # 367 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/367/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7289 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:16 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:20 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:23 h
[INFO] Finished at: 2015-09-14T05:15:15+00:00
[INFO] Final Memory: 72M/993M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8578996443736824591.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire8432387045496321575tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_887576893387536073199tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5372917 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-3697
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
9 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog.testEditLog

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:447)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:340)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:114)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:290)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:126)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:839)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:663)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:896)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:880)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1570)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1213)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:976)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:887)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:819)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:478)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:437)
	at org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog.testEditLog(TestSecurityTokenEditLog.java:111)
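
The NoClassDefFoundError above indicates that the artifact providing AuthenticationFilter (the hadoop-auth module) was missing from the forked test VM's classpath, even though the code compiled against it. A minimal probe for that condition (illustrative only, not part of the test suite):

    public class ClasspathProbe {
      public static void main(String[] args) throws Exception {
        // Throws ClassNotFoundException here if hadoop-auth is absent;
        // the same missing class surfaces as NoClassDefFoundError when
        // HttpServer2 reaches it through static linkage instead.
        Class.forName(
            "org.apache.hadoop.security.authentication.server.AuthenticationFilter");
        System.out.println("hadoop-auth is on the classpath");
      }
    }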


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.testCheckpointStartingMidEditsFile[0]

Error Message:
Port in use: localhost:10001

Stack Trace:
java.net.BindException: Port in use: localhost:10001
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:414)
	at sun.nio.ch.Net.bind(Net.java:406)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:839)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:663)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:896)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:880)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1570)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1213)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:976)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:887)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:819)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:478)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:437)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:112)
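
The BindException above is the classic fixed-port problem: the test asks for localhost:10001 and fails whenever another test, or a server leaked by a previous run, still holds that port. One conventional remedy, sketched below for illustration, is to bind to port 0 so the kernel assigns a free ephemeral port, then read the assignment back:

    import java.net.ServerSocket;

    public class FreePort {
      // Bind to port 0 so the kernel picks a free ephemeral port, then
      // hand that port number to the server under test.
      static int pickFreePort() throws Exception {
        try (ServerSocket s = new ServerSocket(0)) {
          return s.getLocalPort();
        }
      }

      public static void main(String[] args) throws Exception {
        System.out.println("free port: " + pickFreePort());
      }
    }

There is still a small race between closing the probe socket and the server rebinding the port, which is why fully robust suites instead let the server itself bind port 0 and publish whatever port it received.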


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 1
  done: false
] expected:<0> but was:<3>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 1
	 done: false
] expected:<0> but was:<3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #366

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/366/changes>

Changes:

[kasha] YARN-2005. Blacklisting support for scheduling AMs. (Anubhav Dhoot via kasha)

------------------------------------------
[...truncated 8000 lines...]
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.288 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.252 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.766 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.601 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.011 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.268 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.182 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.211 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.633 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.715 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.527 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.792 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.304 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.099 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.125 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.976 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.9 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.698 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.273 sec - in org.apache.hadoop.hdfs.TestReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.447 sec - in org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.103 sec - in org.apache.hadoop.hdfs.TestPipelines
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.433 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.701 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.82 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.684 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.163 sec - in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.959 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.095 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.734 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.027 sec - in org.apache.hadoop.hdfs.TestConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.767 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.186 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.344 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.87 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.625 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.346 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.701 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Failed tests: 
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 28
	 done: false
] expected:<0> but was:<2>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:426->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:445 » EOF
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » Connect Call From asf905.gq1.yg...

Tests run: 3673, Failures: 8, Errors: 4, Skipped: 16
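
The ExitException entries in the "Tests in error" list come from Hadoop's ExitUtil, which the test harness uses to intercept System.exit: with exits disabled, ExitUtil.terminate throws instead of killing the JVM, so a NameNode that cannot sync its edit-log journals during shutdown surfaces as exactly this exception. A hedged sketch of the interception pattern, assuming hadoop-common on the classpath (the terminate call below stands in for the NameNode internals):

    import org.apache.hadoop.util.ExitUtil;

    public class ExitInterception {
      public static void main(String[] args) {
        // In tests, disable real JVM exits so terminate() throws
        // ExitUtil.ExitException instead of calling System.exit.
        ExitUtil.disableSystemExit();
        try {
          ExitUtil.terminate(1, "Could not sync enough journals");
        } catch (ExitUtil.ExitException e) {
          System.out.println("intercepted exit: " + e.getMessage());
        }
      }
    }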

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:11 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.095 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-14T03:42:31+00:00
[INFO] Final Memory: 55M/675M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5373309 bytes
Compression is 0.0%
Took 10 sec
Recording test results
Updating YARN-2005

Hadoop-Hdfs-trunk-Java8 - Build # 366 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/366/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8193 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:11 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.095 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-14T03:42:31+00:00
[INFO] Final Memory: 55M/675M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5373309 bytes
Compression is 0.0%
Took 10 sec
Recording test results
Updating YARN-2005
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
12 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1809)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
End of File Exception between local host is: "asf905.gq1.ygridcore.net/67.195.81.149"; destination host is: "localhost":46362; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf905.gq1.ygridcore.net/67.195.81.149"; destination host is: "localhost":46362; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1449)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy25.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:445)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1103)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:998)


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
Call From asf905.gq1.ygridcore.net/67.195.81.149 to localhost:46362 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf905.gq1.ygridcore.net/67.195.81.149 to localhost:46362 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:633)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:731)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1511)
	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy25.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)
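
All five TestLeaseRecovery2 entries above look like one event seen from different angles: the NameNode at localhost:46362 aborted during the restart (the ExitException), every later RPC to it failed (the EOFException and ConnectException, plus the stopped lease monitor), and teardown then flagged the recorded exit. Roughly what that final check amounts to, paraphrased from the stack trace rather than taken from the actual MiniDFSCluster source:

    import org.apache.hadoop.util.ExitUtil;

    public class TeardownCheck {
      /** Fail the test if any code path called ExitUtil.terminate(). */
      public static void assertNoUnexpectedExit() {
        if (ExitUtil.terminateCalled()) {
          throw new AssertionError("Test resulted in an unexpected exit",
              ExitUtil.getFirstExitException());
        }
      }
    }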


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)
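
The six "expected:<N> but was:<0>" assertions above share a shape: a JMX-exported counter is read and found never to have moved, either because the eviction under test did not happen or because the metric was sampled before the daemon published it. A self-contained sketch of reading such a counter through the platform MBean server, using only the JDK; the bean and attribute names are placeholders, not the ones verifyRamDiskJMXMetric or jmxget actually query:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxMetricProbe {
      /** Read a numeric attribute from an MBean in this JVM. */
      public static long readLongMetric(String bean, String attr)
          throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        return ((Number) mbs.getAttribute(new ObjectName(bean), attr))
            .longValue();
      }

      public static void main(String[] args) throws Exception {
        // Hadoop bean names look like "Hadoop:service=DataNode,name=...";
        // the exact name varies by version and configuration.
        System.out.println(readLongMetric(args[0], args[1]));
      }
    }

Polling such a metric in a retry loop, rather than asserting on a single read, is the usual way to take timing out of these assertions.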


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-1
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 28
  done: false
] expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 28
	 done: false
] expected:<0> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #365

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/365/changes>

Changes:

[stevel] HADOOP-12087. [JDK8] Fix javadoc errors caused by incorrect or illegal tags. (Akira AJISAKA via stevel).

------------------------------------------
[...truncated 7912 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.239 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.084 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.525 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.516 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.508 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.046 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.211 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.564 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.902 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.49 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.496 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.649 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.273 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.598 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.395 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.12 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.965 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.852 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.884 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.651 sec - in org.apache.hadoop.hdfs.TestReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.864 sec - in org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.335 sec - in org.apache.hadoop.hdfs.TestPipelines
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.46 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.342 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.757 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.34 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.654 sec - in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.153 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.318 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.051 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.268 sec - in org.apache.hadoop.hdfs.TestConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.766 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.8 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.122 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.382 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.063 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.032 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.468 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.779 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Failed tests: 
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 24
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 43
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 42
	 done: false
] expected:<0> but was:<3>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>

Tests in error: 
  TestNameNodeMetricsLogger.testDisableMetricsLogger:66->makeNameNode:118 » Bind
  TestNodeCount.testNodeCount:130->checkTimeout:146->checkTimeout:156 Timeout Ti...

Tests run: 3671, Failures: 7, Errors: 2, Skipped: 16
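
Of the two "Tests in error", the TestNameNodeMetricsLogger one is a BindException, the classic symptom of a fixed port colliding with another job on a shared build slave. The usual defence is to let the kernel assign an ephemeral port; a minimal sketch, JDK only (class and method names are illustrative):

    import java.io.IOException;
    import java.net.ServerSocket;

    public class FreePort {
      /** Ask the OS for an unused TCP port, then release it for the test. */
      public static int freePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
          return s.getLocalPort();  // small race: port may be taken before reuse
        }
      }
    }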

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:27 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:54 h
[INFO] Finished at: 2015-09-13T16:43:29+00:00
[INFO] Final Memory: 55M/491M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5285249855051866221.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire4220397878488451621tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4114397840772839427134tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5372697 bytes
Compression is 0.0%
Took 6.1 sec
Recording test results
Updating HADOOP-12087
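
The #365 footer above is not an ordinary test failure: surefire reports that its forked JVM died without handshaking ("VM crash or System.exit called?"). When a stray System.exit() is suspected, one way to find the caller on the Java 8 toolchain this job uses is a logging SecurityManager; a minimal sketch (the class is illustrative, and SecurityManager is deprecated in much later JDKs):

    public class ExitTrap {
      /** Log every System.exit() attempt in this JVM with its call site. */
      public static void install() {
        System.setSecurityManager(new SecurityManager() {
          @Override
          public void checkPermission(java.security.Permission perm) {
            // permit everything else
          }
          @Override
          public void checkExit(int status) {
            new Throwable("System.exit(" + status + ") called from:")
                .printStackTrace();
            // throw new SecurityException("exit vetoed") here to block it
          }
        });
      }
    }

A genuine VM crash, by contrast, usually leaves an hs_err_pid*.log in the forked JVM's working directory (here, hadoop-hdfs-project/hadoop-hdfs).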

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #364

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/364/changes>

Changes:

[stevel] syncing branch-2 and trunk CHANGES.TXT to be closer together

[stevel] HADOOP-12407. Test failing: hadoop.ipc.TestSaslRPC. (stevel)

[wheat9] HDFS-9041. Move entries in META-INF/services/o.a.h.fs.FileSystem to hdfs-client. Contributed by Mingliang Liu.

------------------------------------------
[...truncated 7856 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.953 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.273 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.245 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.817 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.59 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.264 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.583 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.087 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.804 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.545 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.2 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.14 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.208 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.784 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.259 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.498 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.389 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.135 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.018 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.909 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.116 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.67 sec - in org.apache.hadoop.hdfs.TestReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.936 sec - in org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.579 sec - in org.apache.hadoop.hdfs.TestPipelines
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.494 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.786 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.559 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.565 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.018 sec - in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.133 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.07 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.705 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.075 sec - in org.apache.hadoop.hdfs.TestConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.682 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.764 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.249 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.39 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.443 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.972 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.556 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Failed tests: 
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 27
	 done: false
] expected:<0> but was:<2>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>

Tests run: 3672, Failures: 7, Errors: 0, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:09 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.112 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:54 h
[INFO] Finished at: 2015-09-12T23:44:29+00:00
[INFO] Final Memory: 55M/755M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
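
Maven's -rf (resume-from) flag restarts the reactor at the named module, so once the failures are fixed only hadoop-hdfs and the modules downstream of it are rebuilt; the already-successful hadoop-hdfs-client build is not repeated. As an illustration only (the actual goals are whatever the job invoked), rerunning just the failed module's tests could look like:

    mvn test -rf :hadoop-hdfs
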
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5373222 bytes
Compression is 0.0%
Took 5.6 sec
Recording test results
Updating HADOOP-12407
Updating HDFS-9041

Hadoop-Hdfs-trunk-Java8 - Build # 364 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/364/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8049 lines...]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:09 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.112 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:54 h
[INFO] Finished at: 2015-09-12T23:44:29+00:00
[INFO] Final Memory: 55M/755M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5373222 bytes
Compression is 0.0%
Took 5.6 sec
Recording test results
Updating HADOOP-12407
Updating HDFS-9041
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
7 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)
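
Four of the failures in this digest (TestLazyWriter, TestLazyPersistLockedMemory, and both TestLazyPersistReplicaPlacement cases) fail on the same line, LazyPersistTestCase.verifyRamDiskJMXMetric:483, with the same expected:<1> but was:<0> mismatch, which suggests a single RAM-disk eviction counter published over JMX never reached the expected value. The verification pattern those tests share, sketched with illustrative JMX bean and attribute names (not taken from the Hadoop source):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    import static org.junit.Assert.assertEquals;

    public class RamDiskMetricSketch {
        // Mirrors the shape of LazyPersistTestCase.verifyRamDiskJMXMetric:
        // read a DataNode counter over JMX and assert on its value.
        static void verifyRamDiskMetric(String attribute, long expected)
                throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // Illustrative bean name; Hadoop daemons register MBeans under
            // the "Hadoop:service=...,name=..." domain.
            ObjectName dn =
                new ObjectName("Hadoop:service=DataNode,name=DataNodeInfo");
            long actual = ((Number) mbs.getAttribute(dn, attribute)).longValue();
            // A mismatch here is exactly the "expected:<1> but was:<0>"
            // AssertionError recorded in these traces.
            assertEquals(expected, actual);
        }
    }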


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-1
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 27
  done: false
] expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 27
	 done: false
] expected:<0> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #363

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/363/changes>

Changes:

[stevel] fix trunk/hadoop-common CHANGES.TXT to be the reference across trunk & branch-2

------------------------------------------
[...truncated 8013 lines...]
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.866 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.616 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.117 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.763 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.213 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.139 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.179 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.18 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.565 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.254 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.372 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.452 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.298 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.134 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.891 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.416 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.297 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.218 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.401 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.277 sec - in org.apache.hadoop.TestRefreshCallQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.599 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.059 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.889 sec - in org.apache.hadoop.security.TestPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.108 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 6.211 sec <<< FAILURE! - in org.apache.hadoop.tools.TestJMXGet
testDataNode(org.apache.hadoop.tools.TestJMXGet)  Time elapsed: 4.523 sec  <<< FAILURE!
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)

testNameNode(org.apache.hadoop.tools.TestJMXGet)  Time elapsed: 1.592 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.523 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.465 sec - in org.apache.hadoop.tracing.TestTracing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.664 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.673 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.397 sec - in org.apache.hadoop.net.TestNetworkTopology
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.146 sec - in org.apache.hadoop.TestGenericRefresh
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.81 sec - in org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.884 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.294 sec - in org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.977 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.619 sec - in org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.093 sec - in org.apache.hadoop.cli.TestDeleteCLI

Results :

Failed tests: 
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit
  TestDistributedFileSystem.testDFSClientPeerWriteTimeout:1043 write timedout too late in 1256 ms
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 19
	 done: false
] expected:<0> but was:<3>
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » EOF End of File Exception betwe...
  TestBalancerWithMultipleNameNodes.testBalancer » Remote File /tmp.txt could on...

Tests run: 3673, Failures: 9, Errors: 4, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:14 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.075 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-12T20:41:53+00:00
[INFO] Final Memory: 55M/578M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5376954 bytes
Compression is 0.0%
Took 8.5 sec
Recording test results

Hadoop-Hdfs-trunk-Java8 - Build # 363 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/363/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8206 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:14 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:49 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.075 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-12T20:41:53+00:00
[INFO] Final Memory: 55M/578M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5376954 bytes
Compression is 0.0%
Took 8.5 sec
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
13 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1256 ms

Stack Trace:
java.lang.AssertionError: write timedout too late in 1256 ms
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1043)
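
The message embeds the measured latency: the test performs a write that must hit the client's peer write timeout, measures how long the call actually blocked, and asserts the elapsed time stays under a bound. Reduced to the assertion alone (a sketch; the real test derives its limit from its own timeout settings):

    import static org.junit.Assert.assertTrue;

    public class WriteTimeoutSketch {
        // Sketch only: fails with the message seen above when the write
        // blocked longer than the allowed bound before timing out.
        static void assertTimedOutWithin(long elapsedMs, long limitMs) {
            assertTrue("write timedout too late in " + elapsedMs + " ms",
                       elapsedMs < limitMs);
        }
    }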


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1809)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)
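
This IllegalStateException comes from Guava's Preconditions.checkState, which throws with the supplied message whenever its boolean condition is false; LeaseManager.triggerMonitorCheckNow evidently guards against being invoked after the monitor thread has stopped. The mechanism in miniature (state flag and method name are illustrative):

    import com.google.common.base.Preconditions;

    public class MonitorGuardSketch {
        private volatile boolean running;  // illustrative state flag

        void triggerCheckNow() {
            // Throws IllegalStateException("Lease monitor is not running")
            // when the flag is false -- the failure mode in the trace above.
            Preconditions.checkState(running, "Lease monitor is not running");
        }
    }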


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":60316; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":60316; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1449)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy25.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1103)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:998)
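
The Caused by section is the real story: per the trace, the IPC client's receiveRpcResponse starts by reading a 4-byte response length with DataInputStream.readInt, and when the peer (here a NameNode mid-shutdown in a lease-recovery test) closes the connection first, readInt raises EOFException, which NetUtils.wrapException then decorates with the host details quoted above. The primitive in isolation:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;

    public class EofDemo {
        public static void main(String[] args) throws IOException {
            // readInt() needs 4 bytes; an empty stream models a connection
            // closed before any RPC response bytes arrived.
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(new byte[0]));
            try {
                in.readInt();
            } catch (EOFException e) {
                System.out.println("EOF before response length: " + e);
            }
        }
    }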


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)


FAILED:  org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancer

Error Message:
File /tmp.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1575)
 at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:281)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2413)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2230)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2226)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: File /tmp.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1575)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:281)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2413)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2230)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2226)

	at org.apache.hadoop.ipc.Client.call(Client.java:1445)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:419)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1629)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1430)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:573)
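
The "could only be replicated to 0 nodes instead of minReplication (=1)" message means block placement found no usable target: one DataNode was running and that same node was excluded (typically after an earlier pipeline failure), so even the minimum replication floor could not be met. That floor is the dfs.namenode.replication.min setting, default 1; a sketch of reading it through the standard config keys:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;

    public class MinReplicationSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // The floor the NameNode enforces when allocating a new block.
            int min = conf.getInt(
                DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY,
                DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_DEFAULT);
            System.out.println("minReplication = " + min);
        }
    }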


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 19
  done: false
] expected:<0> but was:<3>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 19
	 done: false
] expected:<0> but was:<3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #362

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/362/changes>

Changes:

[vinayakumarb] HDFS-9036. In BlockPlacementPolicyWithNodeGroup#chooseLocalStorage , random node is selected eventhough fallbackToLocalRack is true. (Contributed by J.Andreina)

------------------------------------------
[...truncated 7931 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.236 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.844 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.636 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.787 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.46 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.216 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.456 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.626 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.672 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.317 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.424 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.08 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.155 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.407 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.122 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.008 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.605 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.282 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.527 sec - in org.apache.hadoop.hdfs.TestReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.067 sec - in org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.224 sec - in org.apache.hadoop.hdfs.TestPipelines
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.454 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.428 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.685 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.416 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.371 sec - in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.583 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.574 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.834 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.108 sec - in org.apache.hadoop.hdfs.TestConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.143 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.568 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.108 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.236 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.384 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.716 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.474 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.645 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Failed tests: 
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 23
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 21
	 done: false
] expected:<0> but was:<3>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestDataNodeMetrics.testDataNodeTimeSpend:288 null

Tests in error: 
  TestNameNodeMetricsLogger.testMetricsLoggerOnByDefault:60->makeNameNode:118 » Bind
  TestBalancerWithMultipleNameNodes.testBalancer » Remote File /tmp.txt could on...

Tests run: 3672, Failures: 8, Errors: 2, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:15 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.051 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-12T15:42:01+00:00
[INFO] Final Memory: 55M/691M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5371615 bytes
Compression is 0.0%
Took 7.9 sec
Recording test results
Updating HDFS-9036

Hadoop-Hdfs-trunk-Java8 - Build # 362 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/362/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8124 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:15 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.051 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-12T15:42:01+00:00
[INFO] Final Memory: 55M/691M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5371615 bytes
Compression is 0.0%
Took 7.9 sec
Recording test results
Updating HDFS-9036
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
10 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancer

Error Message:
File /tmp.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1575)
 at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:281)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2413)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2230)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2226)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: File /tmp.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1575)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:281)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2413)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2230)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2226)

	at org.apache.hadoop.ipc.Client.call(Client.java:1445)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:419)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1629)
	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1430)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:573)
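
The interesting part of this trace is its two halves: everything above the
blank line ran inside the NameNode (chooseTarget4NewBlock rejecting the write
because the only live DataNode was excluded), and it reaches the client wrapped
in an org.apache.hadoop.ipc.RemoteException when DataStreamer asks for the next
block. A minimal sketch of how such a wrapped server-side error is typically
inspected on the client (assuming hadoop-common on the classpath; the message
text is copied from above purely for illustration):

    import java.io.IOException;
    import org.apache.hadoop.ipc.RemoteException;

    public class RemoteExceptionSketch {
        public static void main(String[] args) {
            // The RPC layer rebuilds a RemoteException from the server-side
            // exception's class name and message, as in the trace above.
            RemoteException re = new RemoteException(
                IOException.class.getName(),
                "File /tmp.txt could only be replicated to 0 nodes instead of"
                    + " minReplication (=1).");

            // unwrapRemoteException() re-creates the original exception type
            // when that class is instantiable on the client classpath.
            IOException unwrapped = re.unwrapRemoteException();
            System.out.println(unwrapped.getClass().getName() + ": "
                + unwrapped.getMessage());
        }
    }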


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend(TestDataNodeMetrics.java:288)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger.testMetricsLoggerOnByDefault

Error Message:
Problem binding to [localhost:37668] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:37668] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:414)
	at sun.nio.ch.Net.bind(Net.java:406)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:469)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:646)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:2396)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:357)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:760)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:671)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:896)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:880)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger$TestNameNode.<init>(TestNameNodeMetricsLogger.java:142)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger.makeNameNode(TestNameNodeMetricsLogger.java:118)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger.testMetricsLoggerOnByDefault(TestNameNodeMetricsLogger.java:60)
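
This BindException is the classic symptom of a fixed test port colliding with
another test (or a server leaked by an earlier run) on the shared build host.
The usual remedy is to bind to port 0 so the OS picks a free ephemeral port,
and have the test discover the port it actually got; a minimal sketch in plain
java.net, independent of the Hadoop RPC server the test constructs:

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    public class EphemeralPortSketch {
        public static void main(String[] args) throws Exception {
            // Port 0 asks the OS for any free port, avoiding
            // "Address already in use" races between tests.
            try (ServerSocket server = new ServerSocket()) {
                server.bind(new InetSocketAddress("localhost", 0));
                int actualPort = server.getLocalPort();
                // A test would now hand actualPort to its client side.
                System.out.println("bound to localhost:" + actualPort);
            }
        }
    }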


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)
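
All four LazyPersist failures above land on the same line
(LazyPersistTestCase.verifyRamDiskJMXMetric:483) with "expected:<1> but
was:<0>", the usual signature of sampling an asynchronously updated metric
exactly once, before the background eviction has run. A common hardening is to
poll the metric with a deadline instead of asserting immediately; a generic
sketch (the LongSupplier is a placeholder for the JMX read, not the test's own
API):

    import java.util.concurrent.TimeUnit;
    import java.util.function.LongSupplier;

    public class WaitForMetricSketch {
        // Poll until the metric reaches the expected value or time runs out.
        static void waitForMetric(LongSupplier metric, long expected,
                                  long timeoutMillis) throws InterruptedException {
            long deadline = System.nanoTime()
                + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
            while (System.nanoTime() < deadline) {
                if (metric.getAsLong() == expected) {
                    return;                       // metric caught up
                }
                TimeUnit.MILLISECONDS.sleep(50);  // back off and retry
            }
            throw new AssertionError("expected:<" + expected
                + "> but was:<" + metric.getAsLong() + ">");
        }

        public static void main(String[] args) throws InterruptedException {
            long start = System.currentTimeMillis();
            // Toy metric that becomes 1 after 200 ms, standing in for an
            // eviction counter updated on a background thread.
            waitForMetric(
                () -> System.currentTimeMillis() - start > 200 ? 1 : 0, 1, 2000);
            System.out.println("metric reached expected value");
        }
    }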


FAILED:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


FAILED:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 23
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 21
  done: false
] expected:<0> but was:<3>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 23
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 21
	 done: false
] expected:<0> but was:<3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #361

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/361/changes>

Changes:

[vinayakumarb] HDFS-9042. Update document for the Storage policy name (Contributed by J.Andreina)

------------------------------------------
[...truncated 7970 lines...]
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.273 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.237 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.959 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.664 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.316 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.761 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.239 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 44.937 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
testDFSClientPeerWriteTimeout(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time elapsed: 2.039 sec  <<< FAILURE!
java.lang.AssertionError: write timedout too late in 1255 ms
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1043)
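
This inline failure is about timing rather than a wrong value: the test
arranges a write that must time out and asserts that the timeout fires within
a budget, and here it fired too late, after 1255 ms. The measuring pattern is
roughly the sketch below (the stalled write and the 1-second budget are
illustrative stand-ins, not the test's actual setup):

    import java.net.SocketTimeoutException;
    import java.util.concurrent.TimeUnit;

    public class WriteTimeoutSketch {
        public static void main(String[] args) throws Exception {
            long budgetMillis = 1000;   // illustrative upper bound
            long start = System.nanoTime();
            try {
                // Stand-in for a write against a stalled peer with a
                // configured socket write timeout.
                TimeUnit.MILLISECONDS.sleep(300);
                throw new SocketTimeoutException("write timed out");
            } catch (SocketTimeoutException expected) {
                long elapsed = TimeUnit.NANOSECONDS.toMillis(
                    System.nanoTime() - start);
                if (elapsed > budgetMillis) {
                    // Mirrors the failed assertTrue: timeout fired too late.
                    throw new AssertionError(
                        "write timed out too late in " + elapsed + " ms");
                }
                System.out.println("timed out in " + elapsed
                    + " ms, within budget");
            }
        }
    }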

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.426 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 125.919 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.987 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.295 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.03 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.258 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.336 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.414 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.126 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.985 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.653 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.204 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.351 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.48 sec - in org.apache.hadoop.hdfs.TestReplication
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.971 sec - in org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.613 sec - in org.apache.hadoop.hdfs.TestPipelines
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.469 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.584 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.468 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.494 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.072 sec - in org.apache.hadoop.hdfs.TestFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.321 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.333 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.849 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.128 sec - in org.apache.hadoop.hdfs.TestConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.763 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.54 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.12 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.194 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.79 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.813 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.009 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.587 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Failed tests: 
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 45
	 done: false
] expected:<0> but was:<1>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit
  TestDistributedFileSystem.testDFSClientPeerWriteTimeout:1043 write timedout too late in 1255 ms

Tests in error: 
  TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack:368->doTestSingleRackClusterIsSufficientlyReplicated:376->addBlockOnNodes:443 WrongTypeOfReturnValue
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » EOF End of File Exception betwe...

Tests run: 3672, Failures: 9, Errors: 4, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:32 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.061 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:54 h
[INFO] Finished at: 2015-09-12T10:43:40+00:00
[INFO] Final Memory: 55M/564M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5371555 bytes
Compression is 0.0%
Took 8.8 sec
Recording test results
Updating HDFS-9042

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #360

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/360/changes>

Changes:

[rkanter] YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot via rkanter)

[rkanter] YARN-4145. Make RMHATestBase abstract so its not run when running all tests under that namespace (adhoot via rkanter)

[kihwal] Fix up CHANGES.txt

[rkanter] HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter. (zxu via rkanter)

------------------------------------------
[...truncated 8086 lines...]
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.112 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.819 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.043 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.968 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.902 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.112 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.417 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.177 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.125 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.511 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.458 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.245 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.932 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.654 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.456 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.158 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.506 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.453 sec - in org.apache.hadoop.TestRefreshCallQueue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.464 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.119 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.987 sec - in org.apache.hadoop.security.TestPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.133 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 5.99 sec <<< FAILURE! - in org.apache.hadoop.tools.TestJMXGet
testDataNode(org.apache.hadoop.tools.TestJMXGet)  Time elapsed: 4.322 sec  <<< FAILURE!
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)

testNameNode(org.apache.hadoop.tools.TestJMXGet)  Time elapsed: 1.585 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.519 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.501 sec - in org.apache.hadoop.tracing.TestTracing
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.612 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.744 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.379 sec - in org.apache.hadoop.net.TestNetworkTopology
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.323 sec - in org.apache.hadoop.TestGenericRefresh
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.679 sec - in org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.829 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.331 sec - in org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.834 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.853 sec - in org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.383 sec - in org.apache.hadoop.cli.TestDeleteCLI

Results :

Failed tests: 
  TestLeaseRecovery2.tearDown:104 Test resulted in an unexpected exit
  TestDistributedFileSystem.testDFSClientPeerWriteTimeout:1043 write timedout too late in 1376 ms
  TestLazyPersistReplicaPlacement.testSynchronousEviction:92->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistReplicaPlacement.testFallbackToDiskFull:107->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyPersistLockedMemory.testReleaseOnEviction:122->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestLazyWriter.testSynchronousEviction:75->LazyPersistTestCase.verifyRamDiskJMXMetric:483 expected:<1> but was:<0>
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 42
	 done: false
] expected:<0> but was:<3>
  TestJMXGet.testDataNode:154 expected:<8192> but was:<0>
  TestJMXGet.testNameNode:110 expected:<2> but was:<0>

Tests in error: 
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart:421->hardLeaseRecoveryRestartHelper:493 » Exit
  TestLeaseRecovery2.testSoftLeaseRecovery:335 » IllegalState Lease monitor is n...
  TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2:426->hardLeaseRecoveryRestartHelper:445 » EOF
  TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart:432->hardLeaseRecoveryRestartHelper:445 » Connect
  TestLeaseRecovery2.testLeaseRecoverByAnotherUser:158 » IllegalState Lease moni...
  TestLeaseRecovery2.testHardLeaseRecovery:275 » Connect Call From asf904.gq1.yg...
  TestDFSRemove.testRemove:89 » NoClassDefFound org/apache/hadoop/util/Intrusive...

Tests run: 3672, Failures: 9, Errors: 7, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:23 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.076 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-12T01:41:52+00:00
[INFO] Final Memory: 55M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5377081 bytes
Compression is 0.0%
Took 7.3 sec
Recording test results
Updating HADOOP-12348
Updating YARN-4115
Updating YARN-4145

Hadoop-Hdfs-trunk-Java8 - Build # 360 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/360/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8279 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:23 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.076 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-09-12T01:41:52+00:00
[INFO] Final Memory: 55M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5377081 bytes
Compression is 0.0%
Took 7.3 sec
Recording test results
Updating HADOOP-12348
Updating YARN-4115
Updating YARN-4145
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
16 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestDFSRemove.testRemove

Error Message:
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator
	at org.apache.hadoop.util.IntrusiveCollection.iterator(IntrusiveCollection.java:213)
	at org.apache.hadoop.util.IntrusiveCollection.clear(IntrusiveCollection.java:368)
	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.clearPendingCachingCommands(DatanodeManager.java:1587)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1212)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1539)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stopCommonServices(NameNode.java:791)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:956)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1866)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestDFSRemove.testRemove(TestDFSRemove.java:89)
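
A note on the failure mode above: a NoClassDefFoundError for a nested class during shutdown commonly means the class file was resolvable when the referring class was compiled and loaded, but had vanished from the running fork's classpath by the time it was first used (for example, a jar rebuilt or deleted mid-run). Nested classes compile to separate Outer$Inner.class files that typically load lazily, which is why the error can surface this late. A minimal, self-contained illustration of that loading behavior (names are illustrative, not Hadoop's):

    public class LazyInnerLoad {
        // Compiles to a separate LazyInnerLoad$Iter.class, loaded on first use.
        static class Iter { }

        public static void main(String[] args) {
            // Nothing has referenced Iter yet, so its class file has
            // typically not been loaded at this point.
            System.out.println("outer loaded");
            // First use triggers loading; if LazyInnerLoad$Iter.class were
            // missing from the classpath by now, this line would throw
            // NoClassDefFoundError instead of printing.
            System.out.println(new Iter());
        }
    }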


REGRESSION:  org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1376 ms

Stack Trace:
java.lang.AssertionError: write timedout too late in 1376 ms
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1043)
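
The assertion above bounds how quickly a client-side socket write must time out; the failure means the timeout did fire, but later than the test's budget, which is the usual signature of a slow or loaded build slave rather than a functional regression. A minimal sketch of that kind of timing check in plain JUnit, assuming a hypothetical budget and a caller-supplied blocked write (both illustrative, not the actual test's wiring):

    import static org.junit.Assert.assertTrue;

    public class WriteTimeoutBoundCheck {
        // Hypothetical budget; the real test derives its bound from the
        // configured client write timeout plus some tolerance.
        private static final long MAX_TIMEOUT_MILLIS = 1000;

        public static void assertTimedOutInTime(Runnable blockedWrite) {
            long start = System.nanoTime();
            try {
                blockedWrite.run(); // expected to fail once the write times out
            } catch (RuntimeException expected) {
                // the timeout path being measured
            }
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000L;
            // On a loaded machine elapsedMillis can exceed the budget, which
            // is what "write timedout too late in 1376 ms" reports above.
            assertTrue("write timed out too late in " + elapsedMillis + " ms",
                    elapsedMillis <= MAX_TIMEOUT_MILLIS);
        }
    }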


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart(TestLeaseRecovery2.java:421)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1201)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1805)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart(TestLeaseRecovery2.java:421)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1809)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:950)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1912)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1963)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1944)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:493)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart(TestLeaseRecovery2.java:421)
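
The ExitException above is Hadoop's test-friendly stand-in for System.exit: when the edit log cannot flush to any journal, FSEditLog.logSync() calls ExitUtil.terminate(), and with real exits disabled for tests the call surfaces as a throwable instead of killing the JVM, which is why the whole double trace appears in the report. A minimal sketch of that mechanism, assuming ExitUtil's test hooks behave as in org.apache.hadoop.util:

    import org.apache.hadoop.util.ExitUtil;

    public class ExitPathSketch {
        public static void main(String[] args) {
            // In tests, System.exit is disabled so terminate() throws instead.
            ExitUtil.disableSystemExit();
            try {
                // Mirrors FSEditLog.logSync() giving up when no journals can flush.
                ExitUtil.terminate(1,
                        "Could not sync enough journals to persistent storage");
            } catch (ExitUtil.ExitException e) {
                // MiniDFSCluster.shutdown() later reports a recorded exit as
                // "Test resulted in an unexpected exit".
                System.out.println("caught: " + e.getMessage());
            }
        }
    }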


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testSoftLeaseRecovery

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testSoftLeaseRecovery(TestLeaseRecovery2.java:335)
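
The IllegalStateException here (and in testLeaseRecoverByAnotherUser below) comes from a Guava precondition: the test reaches into the NameNode to poke the lease monitor after the cluster has already gone down, a knock-on effect of the earlier unexpected exit, and the guard rejects the call. A minimal sketch of that guard style, with illustrative names standing in for LeaseManager's state:

    import com.google.common.base.Preconditions;

    public class LeaseMonitorGuard {
        private volatile boolean monitorRunning; // stands in for LeaseManager state

        public void triggerMonitorCheckNow() {
            // checkState turns a false condition into IllegalStateException,
            // producing exactly the "Lease monitor is not running" failure.
            Preconditions.checkState(monitorRunning, "Lease monitor is not running");
            // ... wake the monitor thread ...
        }
    }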


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":36749; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

Stack Trace:
java.io.EOFException: End of File Exception between local host is: "asf904.gq1.ygridcore.net/67.195.81.148"; destination host is: "localhost":36749; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
	at org.apache.hadoop.ipc.Client.call(Client.java:1449)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy22.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:445)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:426)
Caused by: java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1103)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:998)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:36749 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:36749 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:633)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:731)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1511)
	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy22.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:445)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:432)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser

Error Message:
Lease monitor is not running

Stack Trace:
java.lang.IllegalStateException: Lease monitor is not running
	at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
	at org.apache.hadoop.hdfs.server.namenode.LeaseManager.triggerMonitorCheckNow(LeaseManager.java:448)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter.setLeasePeriod(NameNodeAdapter.java:135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.setLeasePeriod(MiniDFSCluster.java:2586)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testLeaseRecoverByAnotherUser(TestLeaseRecovery2.java:158)


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:36749 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf904.gq1.ygridcore.net/67.195.81.148 to localhost:36749 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:712)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:633)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:731)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1511)
	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
	at org.apache.hadoop.ipc.Client.call(Client.java:1376)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy20.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:297)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:251)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy22.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1225)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1156)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:431)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:275)


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory.testReleaseOnEviction(TestLazyPersistLockedMemory.java:122)


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testSynchronousEviction(TestLazyPersistReplicaPlacement.java:92)


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement.testFallbackToDiskFull(TestLazyPersistReplicaPlacement.java:107)


REGRESSION:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.verifyRamDiskJMXMetric(LazyPersistTestCase.java:483)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter.testSynchronousEviction(TestLazyWriter.java:75)


REGRESSION:  org.apache.hadoop.tools.TestJMXGet.testDataNode

Error Message:
expected:<8192> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<8192> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testDataNode(TestJMXGet.java:154)


REGRESSION:  org.apache.hadoop.tools.TestJMXGet.testNameNode

Error Message:
expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.tools.TestJMXGet.testNameNode(TestJMXGet.java:110)
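
Both TestJMXGet failures compare a counter read back over JMX against the value the test just produced, and both read 0, which points at the metric not yet being published (or the wrong MBean being reachable) rather than at the file operations themselves. A minimal sketch of that kind of lookup using the standard javax.management API; the service URL, bean name, and attribute below are illustrative, not the test's actual wiring:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxMetricProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; the test runs against a MiniDFSCluster.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            MBeanServerConnection mbs =
                    JMXConnectorFactory.connect(url).getMBeanServerConnection();
            // Hadoop daemons register metrics under the "Hadoop" JMX domain.
            ObjectName bean =
                    new ObjectName("Hadoop:service=NameNode,name=NameNodeActivity");
            // If the metrics system has not flushed yet, counters read as 0,
            // matching "expected:<2> but was:<0>" above.
            System.out.println(mbs.getAttribute(bean, "CreateFileOps"));
        }
    }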


FAILED:  org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
	at org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:104)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 50
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 42
  done: false
] expected:<0> but was:<3>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-1
	 target length: 50
	 current item: 50
	 done: false
, Circular Writer:
	 directory: /test-2
	 target length: 50
	 current item: 42
	 done: false
] expected:<0> but was:<3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #359

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/359/changes>

Changes:

[arp] HDFS-9027. Refactor o.a.h.hdfs.DataStreamer#isLazyPersist() method. (Contributed by Mingliang Liu)

------------------------------------------
[...truncated 7817 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.583 sec - in org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.025 sec - in org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.087 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.925 sec - in org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.292 sec - in org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.18 sec - in org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.53 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.795 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.043 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.646 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.087 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.932 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.981 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.095 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.366 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.157 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.831 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.307 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.702 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.93 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.531 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.121 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.033 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.274 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec - in org.apache.hadoop.fs.TestXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.495 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.518 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.175 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.135 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.252 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.738 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.139 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.306 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.276 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.067 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.4 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.248 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.815 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.455 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.507 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.53 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.568 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.665 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.267 sec - in org.apache.hadoop.fs.TestGlobPaths
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.298 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.177 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.359 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.231 sec - in org.apache.hadoop.TestGenericRefresh

Results :

Failed tests: 
  TestSeveralNameNodes.testCircularLinkedListWrites:90 Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
	 directory: /test-0
	 target length: 50
	 current item: 50
	 done: false
] expected:<0> but was:<1>
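
[Editor's note: the "expected:<0> but was:<1>" suffix above is the standard JUnit assertEquals mismatch format; the test asserts that the count of still-incomplete writers is zero. A minimal sketch of that assertion pattern follows; the class, method, and list names are illustrative, not the actual TestSeveralNameNodes source.]

    // Illustrative only: the JUnit assertEquals pattern that produces
    // "expected:<0> but was:<N>" messages like the one above.
    // Names below are hypothetical, not the real test code.
    import static org.junit.Assert.assertEquals;
    import java.util.List;

    public class WriterCompletionCheck {
      static void assertAllWritersDone(List<String> incompleteWriters) {
        // With one writer still running, this fails with a message
        // ending in "expected:<0> but was:<1>".
        assertEquals("Some writers didn't complete in expected runtime! "
            + "Current writer state:" + incompleteWriters,
            0, incompleteWriters.size());
      }
    }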

Tests in error: 
  TestLeaseRecovery2.<clinit>:77 » NoClassDefFound org/apache/hadoop/conf/Config...
  TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2 » NoClassDefFound
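
[Editor's note: the "<clinit>:77" frame means TestLeaseRecovery2 failed inside its static initializer, before any test method ran; a NoClassDefFoundError there usually indicates a class missing from the runtime test classpath. A minimal sketch of the mechanism, under the assumption of a dependency present at compile time but absent at run time; the class name and scenario are hypothetical.]

    // Sketch of how NoClassDefFoundError surfaces from <clinit>: if a
    // class referenced in a static initializer is missing from the
    // runtime classpath, loading the declaring class fails, and
    // surefire reports it as SomeTest.<clinit>:NN. Hypothetical names.
    public class StaticInitSketch {
      // The static initializer runs once, when the class is loaded.
      static {
        // Compiles against hadoop-common; if that jar is absent at
        // runtime, this line throws NoClassDefFoundError for
        // org/apache/hadoop/conf/Configuration.
        new org.apache.hadoop.conf.Configuration();
      }
      public static void main(String[] args) {
        System.out.println("class loaded without error");
      }
    }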

Tests run: 3666, Failures: 1, Errors: 2, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:16 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.065 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:51 h
[INFO] Finished at: 2015-09-11T21:33:07+00:00
[INFO] Final Memory: 55M/543M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
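
[Editor's note: the resume command takes the same goals the failed run used; Maven leaves <goals> for the user to substitute. As a purely hypothetical example, if this job had invoked "clean install", resuming from the failed module would look like the line below.]

    # Hypothetical goals; substitute whatever this job actually ran.
    mvn clean install -rf :hadoop-hdfs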
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5378037 bytes
Compression is 0.0%
Took 9.2 sec
Recording test results
Updating HDFS-9027