Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2015/05/21 14:56:19 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk #2132

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2132/changes>

Changes:

[wangda] Move YARN-2918 from 2.8.0 to 2.7.1

[xgong] YARN-3681. yarn cmd says "could not find main class 'queue'" in windows.

[jianhe] YARN-3609. Load node labels from storage inside RM serviceStart. Contributed by Wangda Tan

[jianhe] YARN-3654. ContainerLogsPage web UI should not have meta-refresh. Contributed by Xuan Gong

[wheat9] HADOOP-11772. RPC Invoker relies on static ClientCache which has synchronized(this) blocks. Contributed by Haohui Mai.

[aajisaka] HDFS-4383. Document the lease limits. Contributed by Arshad Mohammad.

[aajisaka] HADOOP-10366. Add whitespaces between classes for values in core-default.xml to fit better in browser. Contributed by kanaka kumar avvaru.

------------------------------------------
[...truncated 6221 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.798 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.576 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.349 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.953 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.643 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.208 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.17 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.58 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.277 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.641 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.885 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.299 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.192 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.286 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.865 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.868 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.527 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.486 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.078 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.196 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.548 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.774 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.19 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.12 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.029 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.778 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.066 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.118 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.786 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.747 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.487 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Running org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.179 sec - in org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.876 sec - in org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.544 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Running org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.695 sec - in org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.468 sec - in org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.72 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Running org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.198 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.56 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.261 sec - in org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.196 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsck
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.38 sec - in org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.339 sec - in org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Running org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.6 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Running org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.193 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.608 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.288 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.612 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.097 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.511 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.104 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.996 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Running org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.508 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.482 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Running org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.888 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.982 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.688 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.872 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.332 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.407 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.292 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.311 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.47 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.905 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.84 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.292 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.156 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.55 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.808 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.055 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.298 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.979 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.698 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions

Results :

Tests in error: 
  TestDFSZKFailoverController>ClientBaseWithFixes.setUp:409->ClientBaseWithFixes.startServer:445->ClientBaseWithFixes.createNewServerInstance:348 » Bind
  TestDFSZKFailoverController.shutdown:114 NullPointer

Tests run: 2262, Failures: 0, Errors: 2, Skipped: 13
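
The Bind error above is the classic symptom of a hard-coded test port
already being occupied on the build slave; the follow-on NullPointer in
shutdown() is then likely just collateral, since setUp never completed.
The usual cure is to bind to port 0 and let the OS hand out a free
ephemeral port. A minimal sketch (class name illustrative):

    import java.net.ServerSocket;

    public class FreePortProbe {
        public static void main(String[] args) throws Exception {
            // Port 0 asks the OS for any free ephemeral port, which keeps
            // concurrent test runs on a shared slave from colliding.
            try (ServerSocket socket = new ServerSocket(0)) {
                System.out.println("free port: " + socket.getLocalPort());
            }
        }
    }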

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 49.288 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:19 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:20 h
[INFO] Finished at: 2015-05-21T12:55:57+00:00
[INFO] Final Memory: 66M/964M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5171256906604989555.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire7502973680232758651tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_905990491850262438729tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363065 bytes
Compression is 0.0%
Took 8 sec
Recording test results
Updating HDFS-4383
Updating HADOOP-10366
Updating HADOOP-11772
Updating YARN-2918
Updating YARN-3654
Updating YARN-3609
Updating YARN-3681

Hadoop-Hdfs-trunk - Build # 2133 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2133/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6833 lines...]
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.837 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-22T14:20:00+00:00
[INFO] Final Memory: 61M/678M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363209 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateFailure

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:586)
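
As the message itself notes, this is a client-side knob. A minimal
sketch of relaxing it, assuming a stock
org.apache.hadoop.conf.Configuration; NEVER is one of the accepted
values (alongside DEFAULT and ALWAYS), and the class name is
illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class RelaxedReplacementPolicy {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Property named in the error message above. NEVER tells the
            // client to keep writing on the surviving datanodes instead
            // of demanding a replacement (sensible for a 2-3 node test
            // cluster, not for production durability).
            conf.set(
                "dfs.client.block.write.replace-datanode-on-failure.policy",
                "NEVER");
            try (FileSystem fs = FileSystem.get(conf)) {
                System.out.println("client policy relaxed: " + fs.getUri());
            }
        }
    }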



Hadoop-Hdfs-trunk - Build # 2134 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2134/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6844 lines...]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.291 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:41 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.068 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-23T14:16:49+00:00
[INFO] Final Memory: 60M/719M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362646 bytes
Compression is 0.0%
Took 34 sec
Recording test results
Updating YARN-3701
Updating YARN-3707
Updating MAPREDUCE-6204
Updating HADOOP-11927
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST

Error Message:
dir has ERROR

Stack Trace:
java.lang.IllegalStateException: dir has ERROR
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.checkErrorState(TestAppendSnapshotTruncate.java:430)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.stop(TestAppendSnapshotTruncate.java:483)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST(TestAppendSnapshotTruncate.java:128)
Caused by: java.lang.IllegalStateException: null
	at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.pause(TestAppendSnapshotTruncate.java:479)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.pauseAllFiles(TestAppendSnapshotTruncate.java:247)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:220)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:140)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker$1.run(TestAppendSnapshotTruncate.java:454)
	at java.lang.Thread.run(Thread.java:745)
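
The odd "IllegalStateException: null" in the Caused-by is just Guava's
single-argument Preconditions.checkState, which throws with no detail
message, so the trace printer renders it as "null". The message-bearing
overload makes such failures diagnosable; a sketch (condition and names
illustrative):

    import com.google.common.base.Preconditions;

    public class CheckStateMessages {
        public static void main(String[] args) {
            boolean paused = true;  // flip to false to see the failures
            // No-message form: on failure prints exactly
            // "java.lang.IllegalStateException: null", as in the trace.
            Preconditions.checkState(paused);
            // Message-bearing form: on failure names the offending worker.
            Preconditions.checkState(paused,
                "worker %s is not paused", "dir");
        }
    }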



Hadoop-Hdfs-trunk - Build # 2135 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2135/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8079 lines...]
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.530 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.072 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-24T14:20:12+00:00
[INFO] Final Memory: 54M/685M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362716 bytes
Compression is 0.0%
Took 6.9 sec
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Hdfs-trunk - Build # 2139 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7189 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.787 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:40 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:41 h
[INFO] Finished at: 2015-05-28T14:16:02+00:00
[INFO] Final Memory: 60M/679M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363168 bytes
Compression is 0.0%
Took 13 sec
Recording test results
Updating YARN-3700
Updating YARN-3581
Updating MAPREDUCE-6304
Updating YARN-2355
Updating HADOOP-9891
Updating YARN-3722
Updating HDFS-8431
Updating HDFS-8482
Updating HDFS-8135
Updating MAPREDUCE-6336
Updating HDFS-5033
Updating YARN-3626
Updating YARN-3647
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
9 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0Internal(TestBalancer.java:909)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:905)
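
Note the frames: the ZipException surfaces out of Configuration.set, not
out of the balancer. Configuration parses its default XML resources
(read from jars on the classpath) lazily on first access, so a single
corrupt jar entry on the slave, most likely a damaged artifact in the
local Maven repository, fails every test that touches a Configuration.
A minimal sketch of the trigger path (property value illustrative):

    import org.apache.hadoop.conf.Configuration;

    public class LazyConfLoad {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // The first mutation forces getProps() -> loadResources() ->
            // loadResource() -> XML parse: the exact frames in the traces
            // above. A corrupt zip entry in any jar holding the default
            // resources blows up right here.
            conf.setLong("dfs.blocksize", 128L * 1024 * 1024);
            System.out.println(conf.get("dfs.blocksize"));
        }
    }

The six identical TestBalancer traces that follow are the same root
cause resurfacing once per test method.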


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1Internal(TestBalancer.java:921)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1(TestBalancer.java:917)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2Internal(TestBalancer.java:948)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2(TestBalancer.java:944)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithIncludeListWithPorts

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithIncludeListWithPorts(TestBalancer.java:1208)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:821)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithExcludeList

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithExcludeList(TestBalancer.java:1103)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithExcludeListWithPorts

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithExcludeListWithPorts(TestBalancer.java:1090)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl.testSkipAclEnforcementSuper

Error Message:
org/apache/hadoop/util/IdentityHashStore$Visitor

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/IdentityHashStore$Visitor
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionGranted(AclTestHelpers.java:137)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementSuper(FSAclBaseTest.java:1191)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.IdentityHashStore$Visitor
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionGranted(AclTestHelpers.java:137)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementSuper(FSAclBaseTest.java:1191)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
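
The Caused by section above shows the application classloader failing to resolve IdentityHashStore$Visitor while DFSClient.open is linking; the usual reading is that the class existed at compile time but the jar that should serve it at run time is stale, truncated, or missing. A self-contained sketch of the two exception shapes; the sketch class is hypothetical, and the looked-up name is simply reused from the error message:

    // Hedged sketch, not Hadoop code: Class.forName on an unresolvable class
    // throws ClassNotFoundException directly; when the JVM hits the same gap
    // while linking already-compiled code, it surfaces wrapped as
    // NoClassDefFoundError, the pattern in the trace above.
    public class MissingClassSketch {
      public static void main(String[] args) {
        try {
          Class<?> c = Class.forName("org.apache.hadoop.util.IdentityHashStore$Visitor");
          System.out.println("resolved: " + c);
        } catch (ClassNotFoundException e) {
          System.out.println("not on this classpath: " + e.getMessage());
        }
      }
    }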


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl.testSkipAclEnforcementPermsDisabled

Error Message:
org/apache/hadoop/util/IdentityHashStore$Visitor

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/IdentityHashStore$Visitor
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionDenied(AclTestHelpers.java:118)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementPermsDisabled(FSAclBaseTest.java:1171)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.IdentityHashStore$Visitor
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionDenied(AclTestHelpers.java:118)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementPermsDisabled(FSAclBaseTest.java:1171)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
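
Both ACL regressions fail at the same depth for the same reason: per the frames above, AclTestHelpers verifies permissions by actually reading the file through DFSTestUtil.readFileBuffer, so the classloading problem under DFSClient.open kills the helpers before any ACL logic is exercised. A rough sketch of that helper shape; the names are hypothetical and the logic is simplified (the real helpers also run as a specific test user):

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.AccessControlException;

    // Hedged sketch of the check implied by the stack trace: permission is
    // verified by really opening and reading the file, so any failure in the
    // read path (here, class loading inside DFSClient.open) aborts the test.
    final class AclProbeSketch {
      static boolean canRead(FileSystem fs, Path path) throws IOException {
        try (FSDataInputStream in = fs.open(path)) {
          in.read(); // force an actual read, not just the open
          return true;
        } catch (AccessControlException denied) {
          return false;
        }
      }
    }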



Jenkins build is back to normal : Hadoop-Hdfs-trunk #2141

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2141/changes>


Build failed in Jenkins: Hadoop-Hdfs-trunk #2140

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/changes>

Changes:

[aw] HADOOP-12004. test-patch breaks with reexec in certain situations (Sean Busbey via aw)

[aw] HADOOP-12035. shellcheck plugin displays a wrong version potentially (Kengo Seki via aw)

[aw] HADOOP-12030. test-patch should only report on newly introduced findbugs warnings. (Sean Busbey via aw)

[xgong] YARN-3723. Need to clearly document primaryFilter and otherInfo value

[aw] HADOOP-11406. xargs -P is not portable (Kengo Seki via aw)

[aw] HADOOP-11142. Remove hdfs dfs reference from file system shell documentation (Kengo Seki via aw)

[aw] HADOOP-7947. Validate XMLs if a relevant tool is available, when using scripts (Kengo Seki via aw)

[aw] HADOOP-11983. HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do (Sangjin Lee via aw)

[cmccabe] HDFS-8407. libhdfs hdfsListDirectory must set errno to 0 on success (Masatake Iwasaki via Colin P. McCabe)

[cmccabe] HDFS-8429. Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread.  (zhouyingchao via cmccabe)

[cmccabe] HADOOP-11894. Bump the version of Apache HTrace to 3.2.0-incubating (Masatake Iwasaki via Colin P. McCabe)

[aw] HADOOP-11930. test-patch in offline mode should tell maven to be in offline mode (Sean Busbey via aw)

[cnauroth] HADOOP-11959. WASB should configure client side socket timeout in storage client blob request options. Contributed by Ivan Mitic.

[aw] HADOOP-12022. Fix site -Pdocs -Pdist in hadoop-project-dist; clean out remaining forrest bits (aw)

[cnauroth] HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop. Contributed by Larry McCay.

[vinodkv] Fixed more FileSystemRMStateStore issues. Contributed by Vinod Kumar Vavilapalli.

[wangda] YARN-3716. Node-label-expression should be included by ResourceRequestPBImpl.toString. (Xianyin Xin via wangda)

[aajisaka] HDFS-8443. Document dfs.namenode.service.handler.count in hdfs-site.xml. Contributed by J.Andreina.

[vinayakumarb] HDFS-7401. Add block info to DFSInputStream's WARN message when it adds node to deadNodes (Contributed by Arshad Mohammad)

[vinayakumarb] HADOOP-12042. Users may see TrashPolicy if hdfs dfs -rm is run (Contributed by Andreina J)

------------------------------------------
[...truncated 6171 lines...]
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.671 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.614 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.731 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.718 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.317 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.242 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.881 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.731 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.908 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.875 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.474 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.49 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.633 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.959 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.269 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.766 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.556 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.15 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.758 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.807 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.105 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.171 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.825 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.72 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.286 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Running org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.232 sec - in org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.824 sec - in org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.504 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Running org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.796 sec - in org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.477 sec - in org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.935 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Running org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.369 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.439 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.475 sec - in org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.631 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsck
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec - in org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.342 sec - in org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Running org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.611 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Running org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.171 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.373 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.944 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.357 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.099 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.522 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.171 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.128 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.942 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Running org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.795 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.379 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Running org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.878 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.755 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.292 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.827 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.346 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.353 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.442 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.392 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.302 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.784 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.007 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.142 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.054 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.506 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.762 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.079 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.239 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.048 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.995 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.943 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.021 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints

Results :

Tests run: 2260, Failures: 0, Errors: 0, Skipped: 13

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 54.973 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:24 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:25 h
[INFO] Finished at: 2015-05-29T13:00:37+00:00
[INFO] Final Memory: 76M/931M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7381484223056387280.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5655336589549814393tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1014370353868269233357tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
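
Worth noting: this run died with "The forked VM terminated without properly saying goodbye" even though every test that did report passed (Tests run: 2260, Failures: 0, Errors: 0 above). Surefire lost its forked JVM outright, which points at an OutOfMemoryError, a native crash, or a stray System.exit rather than at a failing assertion. A minimal illustration of the System.exit case; the test class is hypothetical, not one from this module:

    import org.junit.Test;

    // Hedged illustration: a System.exit anywhere inside a forked test run
    // kills the whole surefire fork, so Maven sees a vanished VM ("VM crash
    // or System.exit called?") instead of an ordinary test failure.
    public class ForkKillerSketchTest {
      @Test
      public void exitTakesDownTheFork() {
        System.exit(1);
      }
    }
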
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363207 bytes
Compression is 0.0%
Took 9.2 sec
Recording test results
Updating HADOOP-11983
Updating HADOOP-11934
Updating HADOOP-11894
Updating HADOOP-11959
Updating HADOOP-11142
Updating HADOOP-12004
Updating HDFS-7401
Updating HDFS-8443
Updating YARN-3723
Updating HDFS-8407
Updating HDFS-8429
Updating HADOOP-12035
Updating HADOOP-11406
Updating HADOOP-11930
Updating HADOOP-12022
Updating HADOOP-12030
Updating HADOOP-7947
Updating HADOOP-12042
Updating YARN-3716

Hadoop-Hdfs-trunk - Build # 2140 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6364 lines...]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 54.973 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:24 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:25 h
[INFO] Finished at: 2015-05-29T13:00:37+00:00
[INFO] Final Memory: 76M/931M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7381484223056387280.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5655336589549814393tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1014370353868269233357tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363207 bytes
Compression is 0.0%
Took 9.2 sec
Recording test results
Updating HADOOP-11983
Updating HADOOP-11934
Updating HADOOP-11894
Updating HADOOP-11959
Updating HADOOP-11142
Updating HADOOP-12004
Updating HDFS-7401
Updating HDFS-8443
Updating YARN-3723
Updating HDFS-8407
Updating HDFS-8429
Updating HADOOP-12035
Updating HADOOP-11406
Updating HADOOP-11930
Updating HADOOP-12022
Updating HADOOP-12030
Updating HADOOP-7947
Updating HADOOP-12042
Updating YARN-3716
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2139

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/changes>

Changes:

[wheat9] Update CHANGES.txt for HDFS-8135.

[wangda] YARN-3647. RMWebServices api's should use updated api from CommonNodeLabelsManager to get NodeLabel object. (Sunil G via wangda)

[wangda] MAPREDUCE-6304. Specifying node labels when submitting MR jobs. (Naganarasimha G R via wangda)

[cnauroth] YARN-3626. On Windows localized resources are not moved to the front of the classpath when they should be. Contributed by Craig Welch.

[gera] MAPREDUCE-6336. Enable v2 FileOutputCommitter by default. (Siqi Li via gera)

[wangda] YARN-3581. Deprecate -directlyAccessNodeLabelStore in RMAdminCLI. (Naganarasimha G R via wangda)

[wang] HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang.

[aw] HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw)

[aw] YARN-2355. MAX_APP_ATTEMPTS_ENV may no longer be a useful env var for a container (Darrell Taylor via aw)

[aw] HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source (Darrell Taylor via aw)

[zjshen] YARN-3700. Made generic history service load a number of latest applications according to the parameter or the configuration. Contributed by Xuan Gong.

[cnauroth] HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer.

[devaraj] YARN-3722. Merge multiple TestWebAppUtils into

------------------------------------------
[...truncated 6996 lines...]
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.407 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.429 sec - in org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.502 sec - in org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.969 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.459 sec - in org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.349 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.692 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.107 sec - in org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.77 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.722 sec - in org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.016 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.256 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.377 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.307 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.244 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.221 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.093 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.067 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.664 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.085 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.363 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.618 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.296 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.049 sec - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.378 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.911 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.133 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 178.364 sec - in org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.102 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.43 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.804 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.042 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.09 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.044 sec - in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.387 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.058 sec - in org.apache.hadoop.hdfs.TestFetchImage
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.706 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.062 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.447 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.625 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.492 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.039 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.857 sec - in org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.59 sec - in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.027 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.562 sec - in org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.534 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.086 sec - in org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.696 sec - in org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.006 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.122 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.606 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.427 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.221 sec - in org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.475 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.925 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.804 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.659 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.797 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.031 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.556 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.34 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.059 sec - in org.apache.hadoop.security.TestRefreshUserMappings

Results :

Tests in error: 
  TestNameNodeAcl>FSAclBaseTest.testSkipAclEnforcementSuper:1191 » NoClassDefFound
  TestNameNodeAcl>FSAclBaseTest.testSkipAclEnforcementPermsDisabled:1171 » NoClassDefFound
  TestBalancer.testBalancer0:905->testBalancer0Internal:909->initConf:116 » Runtime
  TestBalancer.testBalancer1:917->testBalancer1Internal:921->initConf:116 » Runtime
  TestBalancer.testBalancer2:944->testBalancer2Internal:948->initConf:116 » Runtime
  TestBalancer.testBalancerCliWithIncludeListWithPorts:1208->initConf:116 » Runtime
  TestBalancer.testUnknownDatanode:821->initConf:116 » Runtime java.util.zip.Zip...
  TestBalancer.testBalancerCliWithExcludeList:1103->initConf:116 » Runtime java....
  TestBalancer.testBalancerWithExcludeListWithPorts:1090->initConf:116 » Runtime

Tests run: 3439, Failures: 0, Errors: 9, Skipped: 17
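
Taken together, the nine errors reduce to two signatures: every TestBalancer case dies in initConf:116 with a RuntimeException whose cause string begins java.util.zip.Zip..., and both TestNameNodeAcl cases die on the unresolvable IdentityHashStore$Visitor. Both patterns are consistent with a corrupt or truncated jar on the test classpath rather than with the tests themselves. A small sketch for sanity-checking a suspect jar; the class name is hypothetical and the path argument is whatever jar you suspect:

    import java.util.zip.ZipException;
    import java.util.zip.ZipFile;

    // Hedged sketch: a jar is a zip, so opening it as a ZipFile and counting
    // its entries is a quick end-to-end readability check.
    public class JarCheckSketch {
      public static void main(String[] args) throws Exception {
        try (ZipFile zf = new ZipFile(args[0])) {
          System.out.println(args[0] + ": " + zf.size() + " entries, readable");
        } catch (ZipException broken) {
          System.out.println(args[0] + ": corrupt (" + broken.getMessage() + ")");
        }
      }
    }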

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.787 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:40 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:41 h
[INFO] Finished at: 2015-05-28T14:16:02+00:00
[INFO] Final Memory: 60M/679M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363168 bytes
Compression is 0.0%
Took 13 sec
Recording test results
Updating YARN-3700
Updating YARN-3581
Updating MAPREDUCE-6304
Updating YARN-2355
Updating HADOOP-9891
Updating YARN-3722
Updating HDFS-8431
Updating HDFS-8482
Updating HDFS-8135
Updating MAPREDUCE-6336
Updating HDFS-5033
Updating YARN-3626
Updating YARN-3647

Build failed in Jenkins: Hadoop-Hdfs-trunk #2138

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2138/changes>

Changes:

[ozawa] MAPREDUCE-6364. Add a Kill link to Task Attempts page. Contributed by Ryu Kobayashi.

[vinodkv] YARN-160. Enhanced NodeManager to automatically obtain cpu/memory values from underlying OS when configured to do so. Contributed by Varun Vasudev.

[jianhe] YARN-3632. Ordering policy should be allowed to reorder an application when demand changes. Contributed by Craig Welch

[cmccabe] HADOOP-11969. ThreadLocal initialization in several classes is not thread safe (Sean Busbey via Colin P. McCabe)

[wangda] YARN-3686. CapacityScheduler should trim default_node_label_expression. (Sunil G via wangda)

[aajisaka] HADOOP-11242. Record the time of calling in tracing span of IPC server. Contributed by Mastake Iwasaki.

------------------------------------------
[...truncated 6656 lines...]
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.414 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.49 sec - in org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.731 sec - in org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.587 sec - in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestTokenAspect
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.466 sec - in org.apache.hadoop.hdfs.web.TestTokenAspect
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.213 sec - in org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.276 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.697 sec - in org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.922 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.248 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.622 sec - in org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.196 sec - in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 28.729 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.918 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.749 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.583 sec - in org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.875 sec - in org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.201 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.374 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Running org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.405 sec - in org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.082 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.698 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.922 sec - in org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.476 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.941 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.08 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.714 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.474 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.698 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.773 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.748 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.19 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.825 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.641 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.457 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.625 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.482 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.465 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.17 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.551 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.39 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.463 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.06 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.461 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.043 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.72 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.605 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.723 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.999 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.39 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.504 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.696 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.089 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.143 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.633 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.905 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.253 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.619 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.764 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.96 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.863 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.821 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.236 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.023 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.176 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.88 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.169 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.706 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.434 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.946 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.467 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.421 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.916 sec - in org.apache.hadoop.TestGenericRefresh

Results :

Tests in error: 
  TestDFSUpgradeWithHA.testFinalizeWithJournalNodes:428 » IO java.lang.RuntimeEx...

Tests run: 3438, Failures: 0, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.692 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-27T14:24:57+00:00
[INFO] Final Memory: 55M/705M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362660 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-3686
Updating HADOOP-11242
Updating YARN-160
Updating HADOOP-11969
Updating YARN-3632
Updating MAPREDUCE-6364

Hadoop-Hdfs-trunk - Build # 2138 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2138/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6849 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.692 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-27T14:24:57+00:00
[INFO] Final Memory: 55M/705M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362660 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-3686
Updating HADOOP-11242
Updating YARN-160
Updating HADOOP-11969
Updating YARN-3632
Updating MAPREDUCE-6364
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testFinalizeWithJournalNodes

Error Message:
java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out

Stack Trace:
java.io.IOException: java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
	at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.doGetUrl(TransferFsImage.java:414)
	at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:399)
	at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.downloadImageToStorage(TransferFsImage.java:116)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.downloadImage(BootstrapStandby.java:318)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.doRun(BootstrapStandby.java:204)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.access$000(BootstrapStandby.java:76)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:114)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:110)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:110)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:421)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testFinalizeWithJournalNodes(TestDFSUpgradeWithHA.java:428)
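
The regression above is a read timeout during BootstrapStandby's fsimage download: as the stack trace shows, TransferFsImage.doGetUrl() opens an HttpURLConnection to the NameNode image servlet, and the blocking read in SocketInputStream exceeds the connection's read timeout when the server stalls. A minimal sketch of that failure mode in plain Java follows; the URL and the 60-second timeouts are illustrative assumptions, not the values the test or TransferFsImage actually use:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    public class ReadTimeoutSketch {
      public static void main(String[] args) throws IOException {
        // Hypothetical image-transfer endpoint, standing in for the NameNode
        // HTTP address that TransferFsImage contacts during bootstrapStandby.
        URL url = new URL("http://127.0.0.1:50070/imagetransfer");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(60_000); // fail fast if nothing is listening
        conn.setReadTimeout(60_000);    // the kind of timeout that fired here
        try (InputStream in = conn.getInputStream()) {
          byte[] buf = new byte[4096];
          while (in.read(buf) != -1) {
            // Drain the response; a stalled server trips the read timeout here.
          }
        } catch (SocketTimeoutException e) {
          // Surfaces exactly as in the stack trace above: "Read timed out".
          System.err.println("Read timed out: " + e.getMessage());
        }
      }
    }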



Build failed in Jenkins: Hadoop-Hdfs-trunk #2137

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2137/changes>

Changes:

[xgong] YARN-2238. Filtering on UI sticks even if I move away from the page.

[aajisaka] HADOOP-8751. NPE in Token.toString() when Token is constructed using null identifier. Contributed by kanaka kumar avvaru.

[ozawa] YARN-2336. Fair scheduler's REST API returns a missing '[' bracket JSON for deep queue tree. Contributed by Kenji Kikushima and Akira Ajisaka.

------------------------------------------
[...truncated 7880 lines...]
     [exec] 2015-05-26 14:16:25,111 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-26 14:16:25,111 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-26 14:16:25,113 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 60336
     [exec] 2015-05-26 14:16:25,113 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-26 14:16:25,170 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:60336
     [exec] 2015-05-26 14:16:25,299 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(159)) - Listening HTTP traffic on /127.0.0.1:36631
     [exec] 2015-05-26 14:16:25,301 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-26 14:16:25,301 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-26 14:16:25,314 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-26 14:16:25,315 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 43648
     [exec] 2015-05-26 14:16:25,323 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:43648
     [exec] 2015-05-26 14:16:25,335 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-26 14:16:25,338 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-26 14:16:25,348 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:56314 starting to offer service
     [exec] 2015-05-26 14:16:25,353 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-26 14:16:25,573 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 43648: starting
     [exec] 2015-05-26 14:16:25,782 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 27540@asf909.gq1.ygridcore.net
     [exec] 2015-05-26 14:16:25,782 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,782 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-26 14:16:25,823 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454>
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-648551470-67.195.81.153-1432649783454 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454/current>
     [exec] 2015-05-26 14:16:25,826 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 27540@asf909.gq1.ygridcore.net
     [exec] 2015-05-26 14:16:25,827 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,827 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454>
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-648551470-67.195.81.153-1432649783454 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454/current>
     [exec] 2015-05-26 14:16:25,863 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=1443975048;bpid=BP-648551470-67.195.81.153-1432649783454;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=1443975048;c=0;bpid=BP-648551470-67.195.81.153-1432649783454;dnuuid=null
     [exec] 2015-05-26 14:16:25,865 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,886 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3
     [exec] 2015-05-26 14:16:25,887 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>, StorageType: DISK
     [exec] 2015-05-26 14:16:25,887 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c
     [exec] 2015-05-26 14:16:25,887 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>, StorageType: DISK
     [exec] 2015-05-26 14:16:25,890 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-26 14:16:25,897 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432669258897 with interval 21600000
     [exec] 2015-05-26 14:16:25,897 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,898 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-26 14:16:25,899 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-26 14:16:25,910 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-648551470-67.195.81.153-1432649783454 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 10ms
     [exec] 2015-05-26 14:16:25,910 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-648551470-67.195.81.153-1432649783454 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 11ms
     [exec] 2015-05-26 14:16:25,910 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-648551470-67.195.81.153-1432649783454: 13ms
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-26 14:16:25,911 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454/current/replicas> doesn't exist 
     [exec] 2015-05-26 14:16:25,911 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454/current/replicas> doesn't exist 
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 1ms
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-26 14:16:25,912 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 1ms
     [exec] 2015-05-26 14:16:25,913 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 beginning handshake with NN
     [exec] 2015-05-26 14:16:25,922 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:52030, datanodeUuid=f7278dd1-3fbc-45fd-be83-aa123f099904, infoPort=36631, infoSecurePort=0, ipcPort=43648, storageInfo=lv=-56;cid=testClusterID;nsid=1443975048;c=0) storage f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,922 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-26 14:16:25,923 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:52030
     [exec] 2015-05-26 14:16:25,925 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-26 14:16:25,925 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-26 14:16:25,929 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 successfully registered with NN
     [exec] 2015-05-26 14:16:25,929 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:56314 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-26 14:16:25,940 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-26 14:16:25,940 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3 for DN 127.0.0.1:52030
     [exec] 2015-05-26 14:16:25,941 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c for DN 127.0.0.1:52030
     [exec] 2015-05-26 14:16:25,950 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-26 14:16:25,950 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314
     [exec] 2015-05-26 14:16:25,963 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c from datanode f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,964 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c node DatanodeRegistration(127.0.0.1:52030, datanodeUuid=f7278dd1-3fbc-45fd-be83-aa123f099904, infoPort=36631, infoSecurePort=0, ipcPort=43648, storageInfo=lv=-56;cid=testClusterID;nsid=1443975048;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-26 14:16:25,964 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3 from datanode f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,965 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3 node DatanodeRegistration(127.0.0.1:52030, datanodeUuid=f7278dd1-3fbc-45fd-be83-aa123f099904, infoPort=36631, infoSecurePort=0, ipcPort=43648, storageInfo=lv=-56;cid=testClusterID;nsid=1443975048;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-26 14:16:25,989 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xc5f894496d527836,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 36 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-26 14:16:25,989 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:26,034 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-26 14:16:26,046 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-26 14:16:26,046 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-26 14:16:26,046 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-26 14:16:26,047 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-26 14:16:26,048 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-26 14:16:26,160 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 43648
     [exec] 2015-05-26 14:16:26,161 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 43648
     [exec] 2015-05-26 14:16:26,161 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 interrupted
     [exec] 2015-05-26 14:16:26,161 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-26 14:16:26,162 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314
     [exec] 2015-05-26 14:16:26,265 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904)
     [exec] 2015-05-26 14:16:26,266 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:26,267 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-26 14:16:26,267 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-26 14:16:26,267 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-26 14:16:26,268 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-26 14:16:26,273 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-26 14:16:26,273 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-26 14:16:26,273 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-26 14:16:26,274 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-26 14:16:26,274 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 2 2 
     [exec] 2015-05-26 14:16:26,274 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-26 14:16:26,275 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-26 14:16:26,276 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-26 14:16:26,277 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 56314
     [exec] 2015-05-26 14:16:26,278 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 56314
     [exec] 2015-05-26 14:16:26,278 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-26 14:16:26,278 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-26 14:16:26,313 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-26 14:16:26,313 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-26 14:16:26,315 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-26 14:16:26,415 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-26 14:16:26,417 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-26 14:16:26,417 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 45.723 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:43 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:44 h
[INFO] Finished at: 2015-05-26T14:18:46+00:00
[INFO] Final Memory: 55M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
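The '-rf' ('--resume-from') flag restarts the Maven reactor at the named module and skips everything that already built (here, the successful hadoop-hdfs-client). A minimal sketch, assuming the job's original goals were 'clean install'; substitute whatever goals this build actually ran:

    mvn clean install -rf :hadoop-hdfs   # goals assumed for illustration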
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362687 bytes
Compression is 0.0%
Took 6.3 sec
Recording test results
Updating YARN-2336
Updating HADOOP-8751
Updating YARN-2238

Hadoop-Hdfs-trunk - Build # 2137 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2137/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8073 lines...]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 45.723 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:43 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:44 h
[INFO] Finished at: 2015-05-26T14:18:46+00:00
[INFO] Final Memory: 55M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362687 bytes
Compression is 0.0%
Took 6.3 sec
Recording test results
Updating YARN-2336
Updating HADOOP-8751
Updating YARN-2238
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2136

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2136/changes>

Changes:

[wheat9] HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang.

------------------------------------------
[...truncated 7811 lines...]
java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:327)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:90)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:606)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:456)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:485)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:481)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(WebHdfsFileSystem.java:881)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:375)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:366)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:359)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:352)
	at org.apache.hadoop.hdfs.TestEncryptionZones.testReadWriteUsingWebHdfs(TestEncryptionZones.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 15, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 130.297 sec - in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.311 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.189 sec - in org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.901 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.92 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.571 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.216 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.311 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.25 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.74 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.097 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.4 sec - in org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.482 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.933 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.908 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.304 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.087 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.361 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.062 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.402 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.101 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.964 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.047 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.515 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.218 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.653 sec - in org.apache.hadoop.hdfs.TestReplication
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.324 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.393 sec - in org.apache.hadoop.hdfs.TestPipelines
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.553 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.91 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.9 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.402 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.359 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.669 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.634 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.66 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.995 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.508 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.103 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.864 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.384 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.309 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.232 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.475 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Tests in error: 
  TestFileTruncate.testTruncateFailure » IO Failed to replace a bad datanode on ...
  TestFileTruncate.testSnapshotWithAppendTruncate » IO Failed to replace a bad d...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestEncryptionZonesWithKMS>TestEncryptionZones.testReadWriteUsingWebHdfs:621 » SocketTimeout

Tests run: 3438, Failures: 0, Errors: 12, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.255 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-25T14:17:21+00:00
[INFO] Final Memory: 67M/697M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
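Since this failure comes from test failures in hadoop-hdfs (rather than the missing docs directory in the build above), a single suspect suite can also be rerun in isolation with Surefire's '-Dtest' filter, which is usually faster than resuming the whole module. A sketch, with TestFileTruncate chosen purely as an example:

    mvn test -Dtest=TestFileTruncate -rf :hadoop-hdfs   # test class name is illustrative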
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362814 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating HDFS-8377

Hadoop-Hdfs-trunk - Build # 2136 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2136/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8004 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.255 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-25T14:17:21+00:00
[INFO] Final Memory: 67M/697M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362814 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating HDFS-8377
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
12 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testReadWriteUsingWebHdfs

Error Message:
Read timed out

Stack Trace:
java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:327)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:90)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:606)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:456)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:485)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:481)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(WebHdfsFileSystem.java:881)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:375)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:366)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:359)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:352)
	at org.apache.hadoop.hdfs.TestEncryptionZones.testReadWriteUsingWebHdfs(TestEncryptionZones.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateFailure

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:586)

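This failure and the next both quote the same client-side setting: 'dfs.client.block.write.replace-datanode-on-failure.policy' controls whether a write pipeline must find a replacement for a failed datanode before continuing. A minimal, hypothetical sketch of relaxing it, reasonable only on very small clusters (e.g. the two- or three-node MiniDFSCluster these tests run) where no spare datanode exists; the class name and usage are illustrative, while the property comes from the message itself and 'NEVER' is one of its documented values (DEFAULT, ALWAYS, NEVER):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.HdfsConfiguration;

    public class RelaxPipelinePolicy {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // With only two or three datanodes there is no spare node to swap
        // into a failed pipeline, so the DEFAULT policy aborts the write.
        // NEVER tells the client to continue on the surviving datanodes.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
            "NEVER");
        FileSystem fs = FileSystem.get(conf); // writes now tolerate pipeline shrink
        System.out.println("Connected to " + fs.getUri());
      }
    }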

REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testSnapshotWithAppendTruncate

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK], DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK], DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:586)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testCopyOnTruncateWithDataNodesRestart

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)

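This and the following TestFileTruncate.setup failures all trace to the same HDFS rule visible in the message: a snapshottable directory cannot be deleted while it still has snapshots. A hypothetical cleanup sketch, assuming fs.defaultFS points at the cluster and using an invented snapshot name 's0'; deleteSnapshot and disallowSnapshot are the standard FileSystem/DistributedFileSystem calls:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class CleanSnapshottableDir {
      public static void main(String[] args) throws Exception {
        DistributedFileSystem dfs =
            (DistributedFileSystem) FileSystem.get(new Configuration());
        Path dir = new Path("/test");
        dfs.deleteSnapshot(dir, "s0");  // every snapshot must go first ("s0" is a placeholder)
        dfs.disallowSnapshot(dir);      // optionally drop snapshottable status too
        dfs.delete(dir, true);          // the delete that failed above now succeeds
      }
    }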

REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testSnapshotWithTruncates

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateRecovery

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateShellCommandOnBlockBoundary

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestart

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testSnapshotTruncateThenDeleteSnapshot

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateEditLogLoad

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
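
These setup() failures share one root cause: some earlier test left /test snapshottable with live snapshots, and HDFS refuses to delete a snapshottable directory while any snapshot exists, so every later test dies before its body runs. A minimal cleanup sketch that would make such a setup() independent of earlier tests (an illustration of the hardening needed, not the project's actual fix):

    import java.io.IOException;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public final class SnapshotCleanup {
      private SnapshotCleanup() {}

      // Delete dir recursively even if earlier tests left snapshots behind.
      public static void forceDelete(DistributedFileSystem fs, Path dir)
          throws IOException {
        if (!fs.exists(dir)) {
          return;
        }
        try {
          // Live snapshots are exposed under the reserved ".snapshot" child.
          for (FileStatus s : fs.listStatus(new Path(dir, ".snapshot"))) {
            fs.deleteSnapshot(dir, s.getPath().getName());
          }
          fs.disallowSnapshot(dir);  // dir is no longer snapshottable
        } catch (IOException e) {
          // dir was never snapshottable; the plain delete below suffices.
        }
        fs.delete(dir, true);
      }
    }

With cleanup like this, the recursive delete at TestFileTruncate.java:119 would no longer depend on every preceding test removing its own snapshots.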



Build failed in Jenkins: Hadoop-Hdfs-trunk #2135

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2135/>

------------------------------------------
[...truncated 7886 lines...]
     [exec] 2015-05-24 14:17:47,502 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(284)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
     [exec] 2015-05-24 14:17:47,502 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.datanode is not defined
     [exec] 2015-05-24 14:17:47,503 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(678)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
     [exec] 2015-05-24 14:17:47,504 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-24 14:17:47,504 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-24 14:17:47,506 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 57485
     [exec] 2015-05-24 14:17:47,506 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-24 14:17:47,557 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:57485
     [exec] 2015-05-24 14:17:47,678 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(162)) - Listening HTTP traffic on /127.0.0.1:60590
     [exec] 2015-05-24 14:17:47,680 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-24 14:17:47,680 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-24 14:17:47,693 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-24 14:17:47,694 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 48866
     [exec] 2015-05-24 14:17:47,701 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:48866
     [exec] 2015-05-24 14:17:47,713 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-24 14:17:47,715 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-24 14:17:47,725 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:48928 starting to offer service
     [exec] 2015-05-24 14:17:47,732 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-24 14:17:47,732 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 48866: starting
     [exec] 2015-05-24 14:17:48,174 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 25938@asf904.gq1.ygridcore.net
     [exec] 2015-05-24 14:17:48,174 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,174 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-24 14:17:48,223 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862>
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-149846206-67.195.81.148-1432477065862 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862/current>
     [exec] 2015-05-24 14:17:48,227 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 25938@asf904.gq1.ygridcore.net
     [exec] 2015-05-24 14:17:48,227 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,227 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862>
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-149846206-67.195.81.148-1432477065862 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862/current>
     [exec] 2015-05-24 14:17:48,271 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=407141728;bpid=BP-149846206-67.195.81.148-1432477065862;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=407141728;c=0;bpid=BP-149846206-67.195.81.148-1432477065862;dnuuid=null
     [exec] 2015-05-24 14:17:48,272 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,294 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5
     [exec] 2015-05-24 14:17:48,294 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>, StorageType: DISK
     [exec] 2015-05-24 14:17:48,295 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-392e90f7-b807-49fa-a540-7f799afac17f
     [exec] 2015-05-24 14:17:48,295 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>, StorageType: DISK
     [exec] 2015-05-24 14:17:48,299 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-24 14:17:48,306 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432495683306 with interval 21600000
     [exec] 2015-05-24 14:17:48,306 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,306 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-24 14:17:48,307 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-24 14:17:48,320 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-149846206-67.195.81.148-1432477065862 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 14ms
     [exec] 2015-05-24 14:17:48,320 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-149846206-67.195.81.148-1432477065862 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 13ms
     [exec] 2015-05-24 14:17:48,320 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-149846206-67.195.81.148-1432477065862: 14ms
     [exec] 2015-05-24 14:17:48,321 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>...
     [exec] 2015-05-24 14:17:48,321 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862/current/replicas> doesn't exist 
     [exec] 2015-05-24 14:17:48,321 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-24 14:17:48,322 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>...
     [exec] 2015-05-24 14:17:48,322 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862/current/replicas> doesn't exist 
     [exec] 2015-05-24 14:17:48,322 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-24 14:17:48,322 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-24 14:17:48,324 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 beginning handshake with NN
     [exec] 2015-05-24 14:17:48,335 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:58314, datanodeUuid=7f2bd5fe-b13a-4066-8bc6-5a9476738d19, infoPort=60590, infoSecurePort=0, ipcPort=48866, storageInfo=lv=-56;cid=testClusterID;nsid=407141728;c=0) storage 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,335 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-24 14:17:48,336 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,339 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 successfully registered with NN
     [exec] 2015-05-24 14:17:48,339 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:48928 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-24 14:17:48,348 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2332)) - No heartbeat from DataNode: 127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,348 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-24 14:17:48,349 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-24 14:17:48,349 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5 for DN 127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,350 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-392e90f7-b807-49fa-a540-7f799afac17f for DN 127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,358 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-24 14:17:48,358 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928
     [exec] 2015-05-24 14:17:48,370 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-392e90f7-b807-49fa-a540-7f799afac17f from datanode 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,370 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-392e90f7-b807-49fa-a540-7f799afac17f node DatanodeRegistration(127.0.0.1:58314, datanodeUuid=7f2bd5fe-b13a-4066-8bc6-5a9476738d19, infoPort=60590, infoSecurePort=0, ipcPort=48866, storageInfo=lv=-56;cid=testClusterID;nsid=407141728;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-24 14:17:48,371 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5 from datanode 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,371 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5 node DatanodeRegistration(127.0.0.1:58314, datanodeUuid=7f2bd5fe-b13a-4066-8bc6-5a9476738d19, infoPort=60590, infoSecurePort=0, ipcPort=48866, storageInfo=lv=-56;cid=testClusterID;nsid=407141728;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
     [exec] 2015-05-24 14:17:48,386 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x113e6b703b11537a,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 25 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-24 14:17:48,386 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,454 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-24 14:17:48,461 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-24 14:17:48,462 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-24 14:17:48,462 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-24 14:17:48,462 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-24 14:17:48,464 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-24 14:17:48,574 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 48866
     [exec] 2015-05-24 14:17:48,575 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 48866
     [exec] 2015-05-24 14:17:48,575 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 interrupted
     [exec] 2015-05-24 14:17:48,575 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-24 14:17:48,575 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928
     [exec] 2015-05-24 14:17:48,679 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19)
     [exec] 2015-05-24 14:17:48,679 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,681 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-24 14:17:48,682 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-24 14:17:48,682 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-24 14:17:48,682 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-24 14:17:48,687 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-24 14:17:48,687 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-24 14:17:48,687 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-24 14:17:48,687 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-24 14:17:48,688 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 1 
     [exec] 2015-05-24 14:17:48,688 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-24 14:17:48,689 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-24 14:17:48,690 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-24 14:17:48,692 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 48928
     [exec] 2015-05-24 14:17:48,692 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 48928
     [exec] 2015-05-24 14:17:48,692 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-24 14:17:48,692 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-24 14:17:48,719 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-24 14:17:48,719 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-24 14:17:48,720 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-24 14:17:48,821 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-24 14:17:48,822 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-24 14:17:48,823 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.530 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.072 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-24T14:20:12+00:00
[INFO] Final Memory: 54M/685M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>">... @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362716 bytes
Compression is 0.0%
Took 6.9 sec
Recording test results

Build failed in Jenkins: Hadoop-Hdfs-trunk #2134

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2134/changes>

Changes:

[ozawa] MAPREDUCE-6204. TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS.

[cmccabe] HADOOP-11927.  Fix "undefined reference to dlopen" error when compiling libhadooppipes (Xianyin Xin via Colin P. McCabe)

[xgong] YARN-3701. Isolating the error of generating a single app report when

[jianhe] YARN-3707. RM Web UI queue filter doesn't work. Contributed by Wangda Tan

------------------------------------------
[...truncated 6651 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.189 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.858 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.455 sec - in org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.79 sec - in org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.672 sec - in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestTokenAspect
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.45 sec - in org.apache.hadoop.hdfs.web.TestTokenAspect
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.213 sec - in org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.169 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.702 sec - in org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.095 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.134 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.625 sec - in org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.092 sec - in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 28.621 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.925 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.942 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.515 sec - in org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.882 sec - in org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.2 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.365 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Running org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.399 sec - in org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.027 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.684 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.931 sec - in org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.466 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.887 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.939 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.728 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.489 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.643 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.775 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.917 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.196 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.776 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.602 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.44 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.435 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.396 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.796 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.555 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.323 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.527 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.687 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.504 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.589 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.955 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.603 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.663 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.921 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.452 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.518 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.66 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.164 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.507 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.844 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.359 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.577 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.848 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.975 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.878 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.857 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.177 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.986 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.067 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.836 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.101 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.633 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.466 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.94 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.895 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.392 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.024 sec - in org.apache.hadoop.TestGenericRefresh

Results :

Tests in error: 
  TestAppendSnapshotTruncate.testAST:128 » IllegalState dir has ERROR

Tests run: 3437, Failures: 0, Errors: 1, Skipped: 17
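
The single error above is TestAppendSnapshotTruncate.testAST failing with an IllegalStateException; the full stack trace lands in the surefire-reports directory referenced further down. To chase it locally, a narrowed rerun along these lines should work (the choice of goal is an assumption; -Dtest is standard surefire usage, and -rf :hadoop-hdfs is the resume point Maven itself suggests below):

    # Hypothetical local rerun, from the source tree root:
    # resume the reactor at hadoop-hdfs and limit surefire to the failing class.
    mvn test -rf :hadoop-hdfs -Dtest=TestAppendSnapshotTruncate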

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.291 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:41 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.068 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-23T14:16:49+00:00
[INFO] Final Memory: 60M/719M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
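
A plausible filled-in version of that resume command — the goals are an assumption, not this job's actual command line — which also surfaces the full stack traces per the -e hint above:

    # Hypothetical resume: restart the reactor at the failed module;
    # -e makes Maven print full stack traces for the reported errors.
    mvn -e test -rf :hadoop-hdfs
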
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362646 bytes
Compression is 0.0%
Took 34 sec
Recording test results
Updating YARN-3701
Updating YARN-3707
Updating MAPREDUCE-6204
Updating HADOOP-11927

Build failed in Jenkins: Hadoop-Hdfs-trunk #2133

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2133/changes>

Changes:

[aajisaka] YARN-3694. Fix dead link for TimelineServer REST API. Contributed by Jagadesh Kiran N.

[devaraj] YARN-3646. Applications are getting stuck sometimes in case of retry

[wheat9] HDFS-8421. Move startFile() and related functions into FSDirWriteFileOp. Contributed by Haohui Mai.

[xyao] HDFS-8451. DFSClient probe for encryption testing interprets empty URI property for "enabled". Contributed by Steve Loughran.

[kasha] YARN-3675. FairScheduler: RM quits when node removal races with continuous-scheduling on the same node. (Anubhav Dhoot via kasha)

[jghoman] HADOOP-12016. Typo in FileSystem::listStatusIterator. Contributed by Arthur Vigil.

[vinodkv] YARN-3684. Changed ContainerExecutor's primary lifecycle methods to use a more extensible mechanism of context objects. Contributed by Sidharta Seethana.

[arp] HDFS-8454. Remove unnecessary throttling in TestDatanodeDeath. (Arpit Agarwal)

[aajisaka] HADOOP-12014. hadoop-config.cmd displays a wrong error message. Contributed by Kengo Seki.

[aajisaka] HADOOP-11955. Fix a typo in the cluster setup doc. Contributed by Yanjun Wang.

[aajisaka] HADOOP-11594. Improve the readability of site index of documentation. Contributed by Masatake Iwasaki.

[vinayakumarb] HDFS-8268. Port conflict log for data node server is not sufficient (Contributed by Mohammad Shahid Khan)

[junping_du] YARN-3594. WintuilsProcessStubExecutor.startStreamReader leaks streams. Contributed by Lars Francke.

[vinayakumarb] HADOOP-11743. maven doesn't clean all the site files (Contributed by ramtin)

------------------------------------------
[...truncated 6640 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.763 sec - in org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.374 sec - in org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.962 sec - in org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.819 sec - in org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.594 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.581 sec - in org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.71 sec - in org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.965 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.439 sec - in org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.248 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.889 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.16 sec - in org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.957 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.752 sec - in org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.229 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.888 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.227 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.254 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.216 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.942 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.608 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.616 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.264 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.695 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.341 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.132 sec - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.932 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.029 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.046 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 179.722 sec - in org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.139 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.44 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.838 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.063 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.138 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.951 sec - in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.448 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.154 sec - in org.apache.hadoop.hdfs.TestFetchImage
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.134 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.14 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.574 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.424 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.495 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.039 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.495 sec - in org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.254 sec - in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.387 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.566 sec - in org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.701 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.609 sec - in org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.871 sec - in org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.828 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.814 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.143 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.406 sec - in org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.253 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.241 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.885 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.82 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.022 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.019 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.577 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.112 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.844 sec - in org.apache.hadoop.security.TestRefreshUserMappings

Results :

Tests in error: 
  TestFileTruncate.testTruncateFailure » IO Failed to replace a bad datanode on ...

Tests run: 3437, Failures: 0, Errors: 1, Skipped: 17
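
This run's lone error is TestFileTruncate.testTruncateFailure aborting with the datanode-replacement IOException raised during write-pipeline recovery. On a real cluster that behaviour is steered by the dfs.client.block.write.replace-datanode-on-failure.* settings; a quick way to inspect the effective values is standard hdfs getconf usage (whether tuning them is the right fix for this flaky test is not established here):

    # Print the effective pipeline-recovery settings on a configured client/cluster.
    hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.enable
    hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.policy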

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.837 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-22T14:20:00+00:00
[INFO] Final Memory: 61M/678M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363209 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675