Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2014/10/09 15:56:01 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk #1896

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1896/changes>

Changes:

[jianhe] YARN-2649. Fixed TestAMRMRPCNodeUpdates test failure. Contributed by Ming Ma

[stevel] YARN-913 service registry: YARN-2652 add hadoop-yarn-registry package under hadoop-yarn

[kihwal] HDFS-7203. Concurrent appending to the same file can cause data

[cmccabe] CHANGES.txt: move HADOOP-10404 underneath branch 2.6

[cnauroth] HADOOP-10809. hadoop-azure: page blob support. Contributed by Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, and Mike Liddell.

[zjshen] HADOOP-11179. Java untar should handle the case that the file entry comes without its parent directory entry. Contributed by Craig Welch.

[mayank] YARN-2598 GHS should show N/A instead of null for the inaccessible information (Zhijie Shen via mayank)

[cmccabe] HDFS-7202. Should be able to omit package name of SpanReceiver on "hadoop trace -add" (iwasakims via cmccabe)

[atm] HADOOP-11161. Expose close method in KeyProvider to give clients of Provider implementations a hook to release resources. Contributed by Arun Suresh.

------------------------------------------
[...truncated 6042 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.09 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.374 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.042 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.22 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.676 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.647 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.165 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.987 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.35 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.818 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.81 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.662 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 377.346 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.402 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.698 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.187 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 153.089 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.361 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.69 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.319 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.262 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.735 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.807 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.566 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.605 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.416 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.593 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.513 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.626 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.237 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.705 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.505 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.622 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.735 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.348 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.104 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.433 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.818 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.825 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.625 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.853 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.82 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.124 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.898 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.248 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.791 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.893 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.469 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.28 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.785 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.831 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.714 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.852 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.206 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.427 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.82 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.27 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.141 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.873 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.864 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.253 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.843 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.701 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.435 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.465 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.943 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.195 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.881 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.737 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.983 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.679 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDataNodeMetrics.testReceivePacketMetrics:128 Test resulted in an unexpected exit

Tests in error: 
  TestDataNodeMetrics.testSendDataPacketMetrics:94 » NoClassDefFound org/apache/...
  TestDataNodeMetrics.testRoundTripAckMetric:185 » NoClassDefFound org/apache/ha...
  TestDataNodeMetrics.testDataNodeMetrics:49 » NoClassDefFound org/apache/hadoop...
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3133, Failures: 1, Errors: 4, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:20 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.238 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:20 h
[INFO] Finished at: 2014-10-09T13:55:06+00:00
[INFO] Final Memory: 45M/877M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-10404
Updating HDFS-7202
Updating HDFS-7203
Updating YARN-2652
Updating YARN-2598
Updating YARN-2649
Updating YARN-913
Updating HADOOP-10809
Updating HADOOP-11179
Updating HADOOP-11161

Hadoop-Hdfs-trunk - Build # 1897 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1897/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 9790 lines...]
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.190 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-10T13:56:10+00:00
[INFO] Final Memory: 63M/851M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-1492
Updating MAPREDUCE-6123
Updating HADOOP-11133
Updating MAPREDUCE-6122
Updating YARN-2662
Updating YARN-2493
Updating YARN-2180
Updating YARN-2544
Updating HDFS-7195
Updating HADOOP-11174
Updating HADOOP-11175
Updating HADOOP-11184
Updating YARN-2629
Updating YARN-2671
Updating YARN-2617
Updating HDFS-7217
Updating HDFS-7026
Updating YARN-2583
Updating HADOOP-11178
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
7 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testDeadDatanode

Error Message:
org/apache/hadoop/http/HttpServer2$Builder

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/http/HttpServer2$Builder
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at org.apache.hadoop.hdfs.DFSUtil.httpServerTemplateForNNAndJN(DFSUtil.java:1731)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:121)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1440)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1104)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:975)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:804)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testDeadDatanode(TestDeadDatanode.java:98)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testDeadDatanode

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.cleanup(TestDeadDatanode.java:60)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1194)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1819)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.doTestMultipleSnapshots(TestOpenFilesWithSnapshot.java:184)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots(TestOpenFilesWithSnapshot.java:162)


REGRESSION:  org.apache.hadoop.tracing.TestTracing.testWriteTraceHooks

Error Message:
timed out to get expected spans: Timed out waiting for condition. Thread diagnostics:
Timestamp: 2014-10-10 01:50:44,115

"java.util.concurrent.ThreadPoolExecutor$Worker@db951f2" daemon prio=5 tid=144 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 48798" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server listener on 48798" daemon prio=5 tid=24 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"java.util.concurrent.ThreadPoolExecutor$Worker@4145582" daemon prio=5 tid=143 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"2012664547@qtp-661890634-1 - Acceptor0 SelectChannelConnector@localhost:52834" daemon prio=5 tid=83 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 34034" daemon prio=5 tid=125 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 49134" daemon prio=5 tid=99 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 3 on 48798" daemon prio=5 tid=39 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Socket Reader #1 for port 34034"  prio=5 tid=115 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Socket Reader #1 for port 49134"  prio=5 tid=87 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 34034" daemon prio=5 tid=114 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"Thread-82" daemon prio=5 tid=107 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 49134" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"225005258@qtp-421092395-1 - Acceptor0 SelectChannelConnector@localhost:33380" daemon prio=5 tid=18 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@698f352" daemon prio=5 tid=106 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@651ee017" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:392)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@feeb372" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=117 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@7cd47880" daemon prio=5 tid=131 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 4 on 48798" daemon prio=5 tid=40 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3d6721bd" daemon prio=5 tid=34 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 49134" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 48798" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-100" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 7 on 38252" daemon prio=5 tid=73 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 48798" daemon prio=5 tid=41 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server Responder" daemon prio=5 tid=64 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"CacheReplicationMonitor(405305361)"  prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2116)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=168 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@314af9f7" daemon prio=5 tid=35 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor.run(DecommissionManager.java:76)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=89 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 4 on 38252" daemon prio=5 tid=70 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=111 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@39b1ff47" daemon prio=5 tid=130 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5ec736e4" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3509)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 48798" daemon prio=5 tid=26 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server idle connection scanner for port 34034" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"Socket Reader #1 for port 48798"  prio=5 tid=25 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Timer-1" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 4 on 49134" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 34034" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 34034" daemon prio=5 tid=126 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 1 on 34034" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"LeaseRenewer:jenkins@localhost:48798" daemon prio=5 tid=177 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:438)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
        at java.lang.Thread.run(Thread.java:662)
"Socket Reader #1 for port 38252"  prio=5 tid=62 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"357393821@qtp-770900021-0" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=15 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 2 on 34034" daemon prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 2 on 49134" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 49134" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=167 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"java.util.concurrent.ThreadPoolExecutor$Worker@6b6c14c0" daemon prio=5 tid=142 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-105" daemon prio=5 tid=134 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"253380575@qtp-623107838-0" daemon prio=5 tid=108 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 9 on 34034" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-2" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 1 on 48798" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-60" daemon prio=5 tid=81 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 38252" daemon prio=5 tid=75 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@72bdec44" daemon prio=5 tid=132 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 49134" daemon prio=5 tid=86 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"IPC Server idle connection scanner for port 38252" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 0 on 38252" daemon prio=5 tid=66 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@711b50a1" daemon prio=5 tid=60 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=27 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 9 on 49134" daemon prio=5 tid=101 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-39" daemon prio=5 tid=56 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 2 on 38252" daemon prio=5 tid=68 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1530)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:86)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:72)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:123)
        at org.apache.hadoop.tracing.TestTracing.assertSpanNamesFound(TestTracing.java:246)
        at org.apache.hadoop.tracing.TestTracing.testWriteTraceHooks(TestTracing.java:96)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"IPC Server handler 6 on 38252" daemon prio=5 tid=72 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"845987535@qtp-623107838-1 - Acceptor0 SelectChannelConnector@localhost:49790" daemon prio=5 tid=109 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@25a0d346" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:5178)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 49134" daemon prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server listener on 38252" daemon prio=5 tid=61 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3589c12a" daemon prio=5 tid=55 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@68e4e358" daemon prio=5 tid=23 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:327)
        at java.lang.Thread.run(Thread.java:662)
"1634052921@qtp-661890634-0" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"1031841975@qtp-421092395-0" daemon prio=5 tid=17 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3804dd1b" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=175 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"IPC Server handler 0 on 48798" daemon prio=5 tid=36 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 34034" daemon prio=5 tid=127 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-0" daemon prio=5 tid=19 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7860e390" daemon prio=5 tid=80 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 38252" daemon prio=5 tid=74 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"pool-1-thread-1"  prio=5 tid=20 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@56bebb88" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5245)
        at java.lang.Thread.run(Thread.java:662)
"Timer-3" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=165 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 49134" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 4 on 34034" daemon prio=5 tid=123 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"1525518005@qtp-770900021-1 - Acceptor0 SelectChannelConnector@localhost:55443" daemon prio=5 tid=58 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:171)
"Thread-106" daemon prio=5 tid=133 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 1 on 49134" daemon prio=5 tid=93 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 2 on 48798" daemon prio=5 tid=38 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 38252" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 48798" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 6 on 48798" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-104" daemon prio=5 tid=135 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 1 on 38252" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 34034" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1294aa42" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:5139)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@9bed3d1" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 38252" daemon prio=5 tid=69 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 49134" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=163 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=164 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 34034" daemon prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=166 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
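
The "main" thread in the dump above shows the shape of the failing check: the test polls for the expected trace spans and, once the deadline passes, attaches a dump of every live thread to the assertion message (visible as the call chain GenericTestUtils.waitFor -> TimedOutTestsListener.buildThreadDump -> Thread.getAllStackTraces). The following is a minimal, self-contained sketch of that poll-then-dump pattern, for illustration only; waitFor and dumpAllStacks are hypothetical names here, not the Hadoop test utilities themselves.

    import java.util.Map;
    import java.util.concurrent.TimeoutException;
    import java.util.function.Supplier;

    public class PollingWaitSketch {

        // Poll 'check' every checkEveryMillis until it returns true,
        // or fail with a TimeoutException once waitForMillis elapses.
        static void waitFor(Supplier<Boolean> check, long checkEveryMillis, long waitForMillis)
                throws TimeoutException, InterruptedException {
            long deadline = System.currentTimeMillis() + waitForMillis;
            while (!check.get()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new TimeoutException(
                        "Timed out waiting for condition. Thread diagnostics:\n" + dumpAllStacks());
                }
                Thread.sleep(checkEveryMillis);
            }
        }

        // Build a human-readable dump of every live thread, in the same spirit
        // as the listener output above, using Thread.getAllStackTraces().
        static String dumpAllStacks() {
            StringBuilder sb = new StringBuilder();
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                sb.append('"').append(t.getName()).append("\" ")
                  .append(t.isDaemon() ? "daemon " : "")
                  .append("prio=").append(t.getPriority())
                  .append(" tid=").append(t.getId())
                  .append(' ').append(t.getState()).append('\n');
                for (StackTraceElement frame : e.getValue()) {
                    sb.append("        at ").append(frame).append('\n');
                }
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            // Example condition: becomes true after roughly 200 ms, so the wait succeeds.
            waitFor(() -> System.currentTimeMillis() - start > 200, 50, 5000);
            System.out.println("condition met");
        }
    }

If the condition never becomes true within the deadline, the thrown TimeoutException carries the full thread dump, which is exactly the kind of diagnostic text reproduced in this report.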



Stack Trace:
java.lang.AssertionError: timed out to get expected spans: Timed out waiting for condition. Thread diagnostics:
Timestamp: 2014-10-10 01:50:44,115

"java.util.concurrent.ThreadPoolExecutor$Worker@db951f2" daemon prio=5 tid=144 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 48798" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server listener on 48798" daemon prio=5 tid=24 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"java.util.concurrent.ThreadPoolExecutor$Worker@4145582" daemon prio=5 tid=143 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"2012664547@qtp-661890634-1 - Acceptor0 SelectChannelConnector@localhost:52834" daemon prio=5 tid=83 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 34034" daemon prio=5 tid=125 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 49134" daemon prio=5 tid=99 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 3 on 48798" daemon prio=5 tid=39 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Socket Reader #1 for port 34034"  prio=5 tid=115 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Socket Reader #1 for port 49134"  prio=5 tid=87 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 34034" daemon prio=5 tid=114 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"Thread-82" daemon prio=5 tid=107 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 49134" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"225005258@qtp-421092395-1 - Acceptor0 SelectChannelConnector@localhost:33380" daemon prio=5 tid=18 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@698f352" daemon prio=5 tid=106 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@651ee017" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:392)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@feeb372" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=117 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@7cd47880" daemon prio=5 tid=131 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 4 on 48798" daemon prio=5 tid=40 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3d6721bd" daemon prio=5 tid=34 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 49134" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 48798" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-100" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 7 on 38252" daemon prio=5 tid=73 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 48798" daemon prio=5 tid=41 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server Responder" daemon prio=5 tid=64 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"CacheReplicationMonitor(405305361)"  prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2116)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=168 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@314af9f7" daemon prio=5 tid=35 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor.run(DecommissionManager.java:76)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=89 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 4 on 38252" daemon prio=5 tid=70 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=111 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@39b1ff47" daemon prio=5 tid=130 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5ec736e4" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3509)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 48798" daemon prio=5 tid=26 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server idle connection scanner for port 34034" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"Socket Reader #1 for port 48798"  prio=5 tid=25 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Timer-1" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 4 on 49134" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 34034" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 34034" daemon prio=5 tid=126 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 1 on 34034" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"LeaseRenewer:jenkins@localhost:48798" daemon prio=5 tid=177 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:438)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
        at java.lang.Thread.run(Thread.java:662)
"Socket Reader #1 for port 38252"  prio=5 tid=62 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"357393821@qtp-770900021-0" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=15 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 2 on 34034" daemon prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 2 on 49134" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 49134" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=167 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"java.util.concurrent.ThreadPoolExecutor$Worker@6b6c14c0" daemon prio=5 tid=142 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-105" daemon prio=5 tid=134 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"253380575@qtp-623107838-0" daemon prio=5 tid=108 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 9 on 34034" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-2" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 1 on 48798" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-60" daemon prio=5 tid=81 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 38252" daemon prio=5 tid=75 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@72bdec44" daemon prio=5 tid=132 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 49134" daemon prio=5 tid=86 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"IPC Server idle connection scanner for port 38252" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 0 on 38252" daemon prio=5 tid=66 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@711b50a1" daemon prio=5 tid=60 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=27 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 9 on 49134" daemon prio=5 tid=101 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-39" daemon prio=5 tid=56 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 2 on 38252" daemon prio=5 tid=68 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1530)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:86)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:72)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:123)
        at org.apache.hadoop.tracing.TestTracing.assertSpanNamesFound(TestTracing.java:246)
        at org.apache.hadoop.tracing.TestTracing.testWriteTraceHooks(TestTracing.java:96)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"IPC Server handler 6 on 38252" daemon prio=5 tid=72 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"845987535@qtp-623107838-1 - Acceptor0 SelectChannelConnector@localhost:49790" daemon prio=5 tid=109 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@25a0d346" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:5178)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 49134" daemon prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server listener on 38252" daemon prio=5 tid=61 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3589c12a" daemon prio=5 tid=55 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@68e4e358" daemon prio=5 tid=23 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:327)
        at java.lang.Thread.run(Thread.java:662)
"1634052921@qtp-661890634-0" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"1031841975@qtp-421092395-0" daemon prio=5 tid=17 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3804dd1b" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=175 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"IPC Server handler 0 on 48798" daemon prio=5 tid=36 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 34034" daemon prio=5 tid=127 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-0" daemon prio=5 tid=19 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7860e390" daemon prio=5 tid=80 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 38252" daemon prio=5 tid=74 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"pool-1-thread-1"  prio=5 tid=20 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@56bebb88" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5245)
        at java.lang.Thread.run(Thread.java:662)
"Timer-3" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=165 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 49134" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 4 on 34034" daemon prio=5 tid=123 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"1525518005@qtp-770900021-1 - Acceptor0 SelectChannelConnector@localhost:55443" daemon prio=5 tid=58 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:171)
"Thread-106" daemon prio=5 tid=133 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 1 on 49134" daemon prio=5 tid=93 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 2 on 48798" daemon prio=5 tid=38 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 38252" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 48798" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 6 on 48798" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-104" daemon prio=5 tid=135 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 1 on 38252" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 34034" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1294aa42" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:5139)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@9bed3d1" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 38252" daemon prio=5 tid=69 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 49134" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=163 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=164 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 34034" daemon prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=166 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)


	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.tracing.TestTracing.assertSpanNamesFound(TestTracing.java:259)
	at org.apache.hadoop.tracing.TestTracing.testWriteTraceHooks(TestTracing.java:96)
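
The trace above, together with the "main" thread in the dump (assertSpanNamesFound blocking inside GenericTestUtils.waitFor before TimedOutTestsListener builds the diagnostics), shows the failure mechanism: the test repeatedly re-checks for the expected trace span names until a timeout expires, and then fails with the "Timed out waiting for condition" message quoted in the error. As a rough illustration of that poll-until-timeout pattern only, and not the actual Hadoop GenericTestUtils or TestTracing code, a minimal sketch follows; the PollingWait class and its names are hypothetical.

    import java.util.concurrent.TimeoutException;
    import java.util.function.Supplier;

    // Hypothetical helper illustrating the poll-until-timeout pattern the test relies on:
    // re-evaluate a condition, sleep between checks, and give up once the deadline passes.
    final class PollingWait {
        static void waitFor(Supplier<Boolean> condition, long checkEveryMs, long timeoutMs)
                throws TimeoutException, InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (!condition.get()) {                        // condition not yet satisfied
                if (System.currentTimeMillis() > deadline) {  // timed out: stop polling
                    throw new TimeoutException("Timed out waiting for condition");
                }
                Thread.sleep(checkEveryMs);                   // back off before the next check
            }
        }

        public static void main(String[] args) throws InterruptedException {
            try {
                // Wait up to 2 seconds for a condition that never becomes true, so the call
                // times out, mirroring how the expected spans never appeared in this run.
                waitFor(() -> false, 200, 2_000);
            } catch (TimeoutException e) {
                System.out.println("gave up: " + e.getMessage());
            }
        }
    }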


REGRESSION:  org.apache.hadoop.tracing.TestTracing.testReadTraceHooks

Error Message:
timed out to get expected spans: Timed out waiting for condition. Thread diagnostics:
Timestamp: 2014-10-10 01:50:45,345

"java.util.concurrent.ThreadPoolExecutor$Worker@db951f2" daemon prio=5 tid=144 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 48798" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server listener on 48798" daemon prio=5 tid=24 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"java.util.concurrent.ThreadPoolExecutor$Worker@4145582" daemon prio=5 tid=143 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"2012664547@qtp-661890634-1 - Acceptor0 SelectChannelConnector@localhost:52834" daemon prio=5 tid=83 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 34034" daemon prio=5 tid=125 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 49134" daemon prio=5 tid=99 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 3 on 48798" daemon prio=5 tid=39 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Socket Reader #1 for port 34034"  prio=5 tid=115 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Socket Reader #1 for port 49134"  prio=5 tid=87 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 34034" daemon prio=5 tid=114 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"Thread-82" daemon prio=5 tid=107 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 49134" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"225005258@qtp-421092395-1 - Acceptor0 SelectChannelConnector@localhost:33380" daemon prio=5 tid=18 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@698f352" daemon prio=5 tid=106 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@651ee017" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:392)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@feeb372" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"DataXceiver for client DFSClient_NONMAPREDUCE_2133047051_1 at /127.0.0.1:59374 [Waiting for operation #6]" daemon prio=5 tid=284 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at java.io.DataInputStream.readShort(DataInputStream.java:295)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=117 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@7cd47880" daemon prio=5 tid=131 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 4 on 48798" daemon prio=5 tid=40 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3d6721bd" daemon prio=5 tid=34 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 49134" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 48798" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-100" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 7 on 38252" daemon prio=5 tid=73 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 48798" daemon prio=5 tid=41 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server Responder" daemon prio=5 tid=64 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"CacheReplicationMonitor(405305361)"  prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2116)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=168 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@314af9f7" daemon prio=5 tid=35 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor.run(DecommissionManager.java:76)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=89 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 4 on 38252" daemon prio=5 tid=70 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=111 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@39b1ff47" daemon prio=5 tid=130 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5ec736e4" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3509)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 48798" daemon prio=5 tid=26 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server idle connection scanner for port 34034" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"Socket Reader #1 for port 48798"  prio=5 tid=25 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Timer-1" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 4 on 49134" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 34034" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 34034" daemon prio=5 tid=126 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 1 on 34034" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"LeaseRenewer:jenkins@localhost:48798" daemon prio=5 tid=177 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:438)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
        at java.lang.Thread.run(Thread.java:662)
"Socket Reader #1 for port 38252"  prio=5 tid=62 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"357393821@qtp-770900021-0" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=15 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 2 on 34034" daemon prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 2 on 49134" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 49134" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=167 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"java.util.concurrent.ThreadPoolExecutor$Worker@6b6c14c0" daemon prio=5 tid=142 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-105" daemon prio=5 tid=134 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"253380575@qtp-623107838-0" daemon prio=5 tid=108 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 9 on 34034" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-2" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 1 on 48798" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-60" daemon prio=5 tid=81 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 38252" daemon prio=5 tid=75 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@72bdec44" daemon prio=5 tid=132 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 49134" daemon prio=5 tid=86 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"IPC Server idle connection scanner for port 38252" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 0 on 38252" daemon prio=5 tid=66 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@711b50a1" daemon prio=5 tid=60 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=27 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 9 on 49134" daemon prio=5 tid=101 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-39" daemon prio=5 tid=56 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 2 on 38252" daemon prio=5 tid=68 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1530)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:86)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:72)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:123)
        at org.apache.hadoop.tracing.TestTracing.assertSpanNamesFound(TestTracing.java:246)
        at org.apache.hadoop.tracing.TestTracing.testReadTraceHooks(TestTracing.java:168)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"IPC Server handler 6 on 38252" daemon prio=5 tid=72 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"845987535@qtp-623107838-1 - Acceptor0 SelectChannelConnector@localhost:49790" daemon prio=5 tid=109 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@25a0d346" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:5178)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 49134" daemon prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server listener on 38252" daemon prio=5 tid=61 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3589c12a" daemon prio=5 tid=55 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@68e4e358" daemon prio=5 tid=23 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:327)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.PeerCache@6197cc" daemon prio=5 tid=285 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:244)
        at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:41)
        at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:119)
        at java.lang.Thread.run(Thread.java:662)
"1634052921@qtp-661890634-0" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"1031841975@qtp-421092395-0" daemon prio=5 tid=17 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3804dd1b" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=175 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"IPC Server handler 0 on 48798" daemon prio=5 tid=36 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 34034" daemon prio=5 tid=127 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-0" daemon prio=5 tid=19 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7860e390" daemon prio=5 tid=80 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 38252" daemon prio=5 tid=74 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"pool-1-thread-1"  prio=5 tid=20 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@56bebb88" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5245)
        at java.lang.Thread.run(Thread.java:662)
"Timer-3" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=165 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 49134" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 4 on 34034" daemon prio=5 tid=123 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"1525518005@qtp-770900021-1 - Acceptor0 SelectChannelConnector@localhost:55443" daemon prio=5 tid=58 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:171)
"Thread-106" daemon prio=5 tid=133 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 2 on 48798" daemon prio=5 tid=38 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 1 on 49134" daemon prio=5 tid=93 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 38252" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 48798" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 6 on 48798" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-104" daemon prio=5 tid=135 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 1 on 38252" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 34034" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1294aa42" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:5139)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@9bed3d1" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 38252" daemon prio=5 tid=69 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 49134" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=163 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=164 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 34034" daemon prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=166 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
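
The "main" thread stack above shows where this hang surfaces: TestTracing.assertSpanNamesFound polls for the expected trace spans through GenericTestUtils.waitFor, and when the wait expires, waitFor builds the thread dump reproduced here via TimedOutTestsListener.buildThreadDiagnosticString before failing with "Timed out waiting for condition". As a rough illustration only — the helper below is a hypothetical, minimal sketch of that poll-until-timeout pattern, not Hadoop's actual GenericTestUtils or TimedOutTestsListener code — the mechanism looks roughly like this:

    import java.util.Map;
    import java.util.concurrent.TimeoutException;
    import java.util.function.BooleanSupplier;

    // Hypothetical sketch of a poll-until-timeout helper that attaches a thread
    // dump on expiry, in the spirit of the frames visible in the "main" stack above.
    public class WaitForSketch {

        /** Polls the condition every checkEveryMillis until it holds or waitForMillis elapses. */
        public static void waitFor(BooleanSupplier check, long checkEveryMillis, long waitForMillis)
                throws TimeoutException, InterruptedException {
            long deadline = System.currentTimeMillis() + waitForMillis;
            while (!check.getAsBoolean()) {
                if (System.currentTimeMillis() > deadline) {
                    // On timeout, include a full thread dump so the hung threads can be
                    // inspected -- which is what produces the long listing in this log.
                    throw new TimeoutException("Timed out waiting for condition. Thread diagnostics:\n"
                            + buildThreadDump());
                }
                Thread.sleep(checkEveryMillis);
            }
        }

        /** Renders every live thread's name, state, and stack, similar to the dump above. */
        static String buildThreadDump() {
            StringBuilder dump = new StringBuilder();
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                dump.append(String.format("\"%s\" %sprio=%d tid=%d %s%n",
                        t.getName(), t.isDaemon() ? "daemon " : "", t.getPriority(),
                        t.getId(), t.getState().toString().toLowerCase()));
                for (StackTraceElement frame : e.getValue()) {
                    dump.append("        at ").append(frame).append('\n');
                }
            }
            return dump.toString();
        }

        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            // Example: wait up to 2 seconds for a condition that becomes true after ~500 ms.
            waitFor(() -> System.currentTimeMillis() - start > 500, 100, 2000);
            System.out.println("condition satisfied");
        }
    }

In this run the condition (the expected span names appearing) never held within the timeout, so the listener captured the dump above at expiry and the test failed with the assertion error shown below.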



Stack Trace:
java.lang.AssertionError: timed out to get expected spans: Timed out waiting for condition. Thread diagnostics:
Timestamp: 2014-10-10 01:50:45,345

"java.util.concurrent.ThreadPoolExecutor$Worker@db951f2" daemon prio=5 tid=144 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 48798" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server listener on 48798" daemon prio=5 tid=24 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"java.util.concurrent.ThreadPoolExecutor$Worker@4145582" daemon prio=5 tid=143 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"2012664547@qtp-661890634-1 - Acceptor0 SelectChannelConnector@localhost:52834" daemon prio=5 tid=83 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 34034" daemon prio=5 tid=125 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 49134" daemon prio=5 tid=99 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 3 on 48798" daemon prio=5 tid=39 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Socket Reader #1 for port 34034"  prio=5 tid=115 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Socket Reader #1 for port 49134"  prio=5 tid=87 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 34034" daemon prio=5 tid=114 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"Thread-82" daemon prio=5 tid=107 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 49134" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:48798" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:736)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
        at java.lang.Thread.run(Thread.java:662)
"225005258@qtp-421092395-1 - Acceptor0 SelectChannelConnector@localhost:33380" daemon prio=5 tid=18 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@698f352" daemon prio=5 tid=106 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@651ee017" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:392)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@feeb372" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"DataXceiver for client DFSClient_NONMAPREDUCE_2133047051_1 at /127.0.0.1:59374 [Waiting for operation #6]" daemon prio=5 tid=284 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at java.io.DataInputStream.readShort(DataInputStream.java:295)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:202)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=117 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@7cd47880" daemon prio=5 tid=131 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 4 on 48798" daemon prio=5 tid=40 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3d6721bd" daemon prio=5 tid=34 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 6 on 49134" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 48798" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-100" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 7 on 38252" daemon prio=5 tid=73 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 48798" daemon prio=5 tid=41 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server Responder" daemon prio=5 tid=64 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"CacheReplicationMonitor(405305361)"  prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2116)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=168 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@314af9f7" daemon prio=5 tid=35 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor.run(DecommissionManager.java:76)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=89 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 4 on 38252" daemon prio=5 tid=70 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=111 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@39b1ff47" daemon prio=5 tid=130 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@5ec736e4" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3509)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 48798" daemon prio=5 tid=26 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server idle connection scanner for port 34034" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"Socket Reader #1 for port 48798"  prio=5 tid=25 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"Timer-1" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 4 on 49134" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 34034" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 34034" daemon prio=5 tid=126 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 1 on 34034" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"LeaseRenewer:jenkins@localhost:48798" daemon prio=5 tid=177 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:438)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
        at java.lang.Thread.run(Thread.java:662)
"Socket Reader #1 for port 38252"  prio=5 tid=62 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:631)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:610)
"357393821@qtp-770900021-0" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=15 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 2 on 34034" daemon prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 2 on 49134" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 49134" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data6/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=167 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"java.util.concurrent.ThreadPoolExecutor$Worker@6b6c14c0" daemon prio=5 tid=142 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"Thread-105" daemon prio=5 tid=134 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"253380575@qtp-623107838-0" daemon prio=5 tid=108 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 9 on 34034" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-2" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 1 on 48798" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-60" daemon prio=5 tid=81 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 9 on 38252" daemon prio=5 tid=75 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter@72bdec44" daemon prio=5 tid=132 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2593)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server listener on 49134" daemon prio=5 tid=86 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"IPC Server idle connection scanner for port 38252" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server handler 0 on 38252" daemon prio=5 tid=66 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@711b50a1" daemon prio=5 tid=60 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server Responder" daemon prio=5 tid=27 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:850)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:833)
"IPC Server handler 9 on 49134" daemon prio=5 tid=101 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-39" daemon prio=5 tid=56 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:459)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 2 on 38252" daemon prio=5 tid=68 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1530)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:86)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:72)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:123)
        at org.apache.hadoop.tracing.TestTracing.assertSpanNamesFound(TestTracing.java:246)
        at org.apache.hadoop.tracing.TestTracing.testReadTraceHooks(TestTracing.java:168)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"IPC Server handler 6 on 38252" daemon prio=5 tid=72 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"845987535@qtp-623107838-1 - Acceptor0 SelectChannelConnector@localhost:49790" daemon prio=5 tid=109 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@25a0d346" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:5178)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server idle connection scanner for port 49134" daemon prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"IPC Server listener on 38252" daemon prio=5 tid=61 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:683)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3589c12a" daemon prio=5 tid=55 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@68e4e358" daemon prio=5 tid=23 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:327)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.PeerCache@6197cc" daemon prio=5 tid=285 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:244)
        at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:41)
        at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:119)
        at java.lang.Thread.run(Thread.java:662)
"1634052921@qtp-661890634-0" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"1031841975@qtp-421092395-0" daemon prio=5 tid=17 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3804dd1b" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Client (795485135) connection to localhost/127.0.0.1:48798 from jenkins" daemon prio=5 tid=175 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:920)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:965)
"IPC Server handler 0 on 48798" daemon prio=5 tid=36 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 8 on 34034" daemon prio=5 tid=127 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Timer-0" daemon prio=5 tid=19 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7860e390" daemon prio=5 tid=80 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:150)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:139)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 38252" daemon prio=5 tid=74 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"pool-1-thread-1"  prio=5 tid=20 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:609)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:602)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@56bebb88" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5245)
        at java.lang.Thread.run(Thread.java:662)
"Timer-3" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        at java.util.TimerThread.run(Timer.java:462)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=165 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 8 on 49134" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 4 on 34034" daemon prio=5 tid=123 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"1525518005@qtp-770900021-1 - Acceptor0 SelectChannelConnector@localhost:55443" daemon prio=5 tid=58 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:171)
"Thread-106" daemon prio=5 tid=133 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 2 on 48798" daemon prio=5 tid=38 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 1 on 49134" daemon prio=5 tid=93 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 38252" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 48798" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 6 on 48798" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-104" daemon prio=5 tid=135 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 1 on 38252" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 34034" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1294aa42" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:5139)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@9bed3d1" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 38252" daemon prio=5 tid=69 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 49134" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=163 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=164 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 34034" daemon prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-890430250-67.195.81.153-1412949040889" daemon prio=5 tid=166 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)


	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.tracing.TestTracing.assertSpanNamesFound(TestTracing.java:259)
	at org.apache.hadoop.tracing.TestTracing.testReadTraceHooks(TestTracing.java:168)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-15
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1898 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1898/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6202 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.191 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:22 h
[INFO] Finished at: 2014-10-11T13:57:38+00:00
[INFO] Final Memory: 66M/877M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-6252
Updating HADOOP-11193
Updating YARN-2501
Updating HDFS-7198
Updating YARN-2494
Updating HADOOP-11188
Updating HDFS-7209
Updating HDFS-6657
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)
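
For triage purposes, the "expected:<18> but was:<12>" text above is the standard JUnit 4 equality-failure format. A minimal, hypothetical sketch that produces the same message shape (illustrative names only; this is not the code of TestDNFencing.testQueueingWithAppend):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical reproduction of the failure-message shape only; the
    // quantities that TestDNFencing actually compares are not modelled here.
    public class AssertMessageShapeTest {
      @Test
      public void expectedVersusActual() {
        long expectedCount = 18; // value the test expects
        long actualCount = 12;   // value actually observed
        // On mismatch, JUnit 4 throws:
        //   java.lang.AssertionError: expected:<18> but was:<12>
        assertEquals(expectedCount, actualCount);
      }
    }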


FAILED:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1194)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1819)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.doTestMultipleSnapshots(TestOpenFilesWithSnapshot.java:184)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots(TestOpenFilesWithSnapshot.java:162)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-15
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)
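
The "Timed out waiting for 2 replicas" cause above is raised by the test's own polling helper (ReplicationToggler.waitForReplicas). As rough orientation, the same kind of wait can be expressed against the public FileSystem API as follows; this is a hypothetical sketch, not the helper in TestDNFencingWithReplication:

    import java.io.IOException;

    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper: poll until every block of a file reports at least
    // `replicas` datanode locations, or fail after `timeoutMs`.
    public class ReplicaWait {
      public static void waitForReplicas(FileSystem fs, Path path,
          int replicas, long timeoutMs) throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
          FileStatus status = fs.getFileStatus(path);
          BlockLocation[] blocks =
              fs.getFileBlockLocations(status, 0, status.getLen());
          boolean satisfied = blocks.length > 0;
          for (BlockLocation block : blocks) {
            if (block.getHosts().length < replicas) {
              satisfied = false;
              break;
            }
          }
          if (satisfied) {
            return;
          }
          if (System.currentTimeMillis() > deadline) {
            throw new IOException("Timed out waiting for " + replicas
                + " replicas on path " + path);
          }
          Thread.sleep(500); // arbitrary poll interval for this sketch
        }
      }
    }

One common way such a wait fails is simply that re-replication has not caught up before the deadline during failover/fencing churn, which matches the repeated timeouts seen across the builds in this thread.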



Hadoop-Hdfs-trunk - Build # 1899 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1899/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6204 lines...]
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:37 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.215 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:37 h
[INFO] Finished at: 2014-10-12T14:12:01+00:00
[INFO] Final Memory: 66M/865M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2668
Updating MAPREDUCE-5875
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1194)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1819)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.doTestMultipleSnapshots(TestOpenFilesWithSnapshot.java:184)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots(TestOpenFilesWithSnapshot.java:162)
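
The timeout above originates in MiniDFSCluster.waitClusterUp(), invoked from restartNameNode() while the restarted NameNode waits for the cluster to report itself up again. For orientation, a skeletal, hypothetical use of the in-process cluster that goes through the same code path (not the body of TestOpenFilesWithSnapshot):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    // Skeletal sketch of the MiniDFSCluster lifecycle seen in the stack
    // trace above: build a small in-JVM cluster, then restart the NameNode,
    // which internally calls waitClusterUp() -- the method that throws
    // "Timed out waiting for Mini HDFS Cluster to start" when the cluster
    // does not come back within its wait limit.
    public class MiniClusterRestartSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster =
            new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
        try {
          cluster.waitActive();      // initial startup
          cluster.restartNameNode(); // the call shown in the stack trace above
        } finally {
          cluster.shutdown();
        }
      }
    }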


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-13
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1901 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1901/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6198 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.266 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-14T13:56:47+00:00
[INFO] Final Memory: 66M/851M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2667
Updating YARN-2641
Updating YARN-2651
Updating HDFS-7090
Updating MAPREDUCE-6115
Updating YARN-2308
Updating MAPREDUCE-6125
Updating HDFS-6544
Updating HDFS-7236
Updating YARN-2377
Updating HDFS-7237
Updating YARN-2566
Updating HADOOP-11198
Updating HADOOP-11176
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-15
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1902 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1902/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6291 lines...]

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.222 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-15T13:56:20+00:00
[INFO] Final Memory: 57M/836M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2656
Updating HDFS-7228
Updating HDFS-7201
Updating HADOOP-11181
Updating HDFS-7190
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
14 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testTransitionToActiveWhenOtherNamenodeisActive

Error Message:
org/apache/hadoop/conf/ReconfigurableBase

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/conf/ReconfigurableBase
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
	at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1385)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:825)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.setup(TestDFSHAAdminMiniCluster.java:72)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testTransitionToActiveWhenOtherNamenodeisActive

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.shutdown(TestDFSHAAdminMiniCluster.java:85)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testTryFailoverToSafeMode

Error Message:
org/apache/hadoop/hdfs/server/datanode/DataNode

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/datanode/DataNode
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1385)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:825)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.setup(TestDFSHAAdminMiniCluster.java:72)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testTryFailoverToSafeMode

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.shutdown(TestDFSHAAdminMiniCluster.java:85)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testCheckHealth

Error Message:
org/apache/hadoop/hdfs/server/datanode/DataNode

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/datanode/DataNode
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1385)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:825)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.setup(TestDFSHAAdminMiniCluster.java:72)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testCheckHealth

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.shutdown(TestDFSHAAdminMiniCluster.java:85)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer

Error Message:
org/apache/hadoop/hdfs/server/datanode/DataNode

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/datanode/DataNode
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1385)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:825)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.setup(TestDFSHAAdminMiniCluster.java:72)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.shutdown(TestDFSHAAdminMiniCluster.java:85)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testStateTransition

Error Message:
org/apache/hadoop/hdfs/server/datanode/DataNode

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/datanode/DataNode
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1385)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:825)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.setup(TestDFSHAAdminMiniCluster.java:72)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testStateTransition

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.shutdown(TestDFSHAAdminMiniCluster.java:85)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testGetServiceState

Error Message:
org/apache/hadoop/hdfs/server/datanode/DataNode

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/datanode/DataNode
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1385)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:825)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.setup(TestDFSHAAdminMiniCluster.java:72)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testGetServiceState

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.shutdown(TestDFSHAAdminMiniCluster.java:85)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-18
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1903 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1903/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6193 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.220 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2014-10-16T13:58:05+00:00
[INFO] Final Memory: 62M/864M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7208
Updating YARN-2312
Updating HDFS-7185
Updating HDFS-5089
Updating YARN-2496
Updating MAPREDUCE-5970
Updating MAPREDUCE-5873
Updating HADOOP-11181
Updating YARN-2685
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-12
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1904 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1904/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6191 lines...]
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.161 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2014-10-17T13:59:00+00:00
[INFO] Final Memory: 56M/866M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2682
Updating HADOOP-11182
Updating YARN-2621
Updating YARN-2689
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-13
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1905 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1905/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6197 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.232 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:22 h
[INFO] Finished at: 2014-10-18T13:57:13+00:00
[INFO] Final Memory: 64M/830M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7260
Updating YARN-1879
Updating YARN-2588
Updating HDFS-7242
Updating HDFS-7252
Updating HDFS-6995
Updating MAPREDUCE-5542
Updating YARN-2705
Updating YARN-2699
Updating HADOOP-11207
Updating YARN-2676
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-10
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1906 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1906/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6187 lines...]
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.193 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-19T13:56:21+00:00
[INFO] Final Memory: 60M/866M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2504
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-12
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Hadoop-Hdfs-trunk - Build # 1907 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1907/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6210 lines...]
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.241 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2014-10-20T13:58:15+00:00
[INFO] Final Memory: 64M/850M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-5911
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode

Error Message:
expected:<0> but was:<-3>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<-3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:737)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-11
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)
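
Two of the failures above (TestBalancer and TestDNFencing) end at an org.junit.Assert.assertEquals call, and the "expected:<X> but was:<Y>" messages are the standard JUnit 4 failure format: the observed values (-3 and 12) simply did not match the expected ones (0 and 18). A minimal, self-contained sketch of that pattern, using a hypothetical class and illustrative numbers rather than the actual TestBalancer or TestDNFencing code:

    import org.junit.Assert;
    import org.junit.Test;

    // Hypothetical example: produces the same "expected:<18> but was:<12>"
    // style of AssertionError seen in the traces above. The numbers and the
    // countQueuedMessages() helper are illustrative only.
    public class ExpectedButWasExample {

        // Stand-in for a counter that the real tests derive from cluster state.
        private int countQueuedMessages() {
            return 12;
        }

        @Test
        public void testCountMatchesExpectation() {
            // Assert.assertEquals(long, long) throws
            //   java.lang.AssertionError: expected:<18> but was:<12>
            // when the values differ, via the same fail/failNotEquals frames
            // that appear in the traces above.
            Assert.assertEquals(18, countQueuedMessages());
        }
    }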



Hadoop-Hdfs-trunk - Build # 1908 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1908/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6242 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.227 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-21T13:56:49+00:00
[INFO] Final Memory: 63M/893M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-1879
Updating HDFS-7184
Updating YARN-2701
Updating HDFS-7266
Updating HADOOP-11194
Updating YARN-2582
Updating YARN-2673
Updating YARN-2717
Updating HDFS-7154
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot.testStandbyIsHot

Error Message:
Problem in starting http server. Server handlers failed

Stack Trace:
java.io.IOException: Problem in starting http server. Server handlers failed
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:819)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:704)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:591)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:763)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:747)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1443)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1104)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:975)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:804)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot.testStandbyIsHot(TestStandbyIsHot.java:74)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot.testDatanodeRestarts

Error Message:
Problem in starting http server. Server handlers failed

Stack Trace:
java.io.IOException: Problem in starting http server. Server handlers failed
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:819)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:704)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:591)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:763)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:747)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1443)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1104)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:975)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:804)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:465)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:424)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot.testDatanodeRestarts(TestStandbyIsHot.java:148)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-14
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)
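
The "java.lang.RuntimeException: Deferred" in the TestDNFencingWithReplication failure above is only a wrapper: the real error (the IOException about waiting for 2 replicas) occurred on a background test thread, was recorded by the test context, and was rethrown on the main thread when MultithreadedTestUtil$TestContext.stop() ran. A simplified, hypothetical sketch of that deferred-exception pattern, not the actual MultithreadedTestUtil source:

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicReference;

    // Hypothetical, simplified version of the pattern implied by the
    // MultithreadedTestUtil frames above: a worker thread records its failure
    // and the main thread rethrows it, wrapped as "Deferred", on stop().
    public class DeferredExceptionSketch {

        interface Action {
            void doAnAction() throws Exception;   // stands in for ReplicationToggler.doAnAction()
        }

        static class TestContext {
            private final AtomicReference<Throwable> firstFailure = new AtomicReference<>();
            private final Thread worker;

            TestContext(Action action) {
                worker = new Thread(() -> {
                    try {
                        action.doAnAction();
                    } catch (Throwable t) {
                        firstFailure.compareAndSet(null, t);   // keep the first failure
                    }
                });
            }

            void start() {
                worker.start();
            }

            // Mirrors checkException()/stop() in the trace: surface the worker's
            // failure on the main thread as RuntimeException("Deferred", cause).
            void stop() throws InterruptedException {
                worker.join();
                Throwable t = firstFailure.get();
                if (t != null) {
                    throw new RuntimeException("Deferred", t);
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            TestContext ctx = new TestContext(() -> {
                throw new IOException("Timed out waiting for 2 replicas on path /test-14");
            });
            ctx.start();
            ctx.stop();   // throws RuntimeException: Deferred, caused by the IOException above
        }
    }

The practical consequence is that the line worth reading is the "Caused by" entry, not the Deferred wrapper itself.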



Jenkins build is back to normal : Hadoop-Hdfs-trunk #1910

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1910/changes>


Build failed in Jenkins: Hadoop-Hdfs-trunk #1909

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1909/changes>

Changes:

[brandonli] HDFS-7259. Unresponsive NFS mount point due to deferred COMMIT response. Contributed by Brandon Li

[jlowe] YARN-90. NodeManager should identify failed disks becoming good again. Contributed by Varun Vasudev

[brandonli] HDFS-7215. Add JvmPauseMonitor to NFS gateway. Contributed by Brandon Li

[cnauroth] YARN-2720. Windows: Wildcard classpath variables not expanded against resources contained in archives. Contributed by Craig Welch.

[wang] HDFS-7221. TestDNFencingWithReplication fails consistently. Contributed by Charles Lamb.

[jitendra] Updated CHANGES.txt for HDFS-6581 merge into branch-2.6.

[aw] HDFS-7204. balancer doesn't run as a daemon (aw)

[zjshen] YARN-2709. Made timeline client getDelegationToken API retry if ConnectException happens. Contributed by Li Lu.

[vinodkv] YARN-2715. Fixed ResourceManager to respect common configurations for proxy users/groups beyond just the YARN level config. Contributed by Zhijie Shen.

[zjshen] YARN-2721. Suppress NodeExist exception thrown by ZKRMStateStore when it retries creating znode. Contributed by Jian He.

------------------------------------------
[...truncated 5989 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.293 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.477 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.257 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.065 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.593 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.39 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.615 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.513 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.581 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.868 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.344 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.986 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.243 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.3 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 377.663 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.828 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.503 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.196 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 156.931 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.713 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.114 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.721 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.216 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.398 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.775 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.56 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.543 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.587 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.964 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.536 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.468 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.209 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.488 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.772 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.712 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.043 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.298 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.067 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.391 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.226 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.239 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.498 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.226 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.143 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.478 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.426 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.65 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.11 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.14 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.428 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.535 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.127 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.053 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.112 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.995 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.15 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.922 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.845 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.732 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.069 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.766 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.772 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.445 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.088 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.003 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.504 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.7 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.909 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.804 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.171 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.017 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.459 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.006 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests run: 3139, Failures: 1, Errors: 0, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.239 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:22 h
[INFO] Finished at: 2014-10-22T13:57:19+00:00
[INFO] Final Memory: 64M/851M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7215
Updating HDFS-6581
Updating HDFS-7204
Updating YARN-2720
Updating YARN-90
Updating YARN-2721
Updating YARN-2715
Updating HDFS-7259
Updating HDFS-7221
Updating YARN-2709

Hadoop-Hdfs-trunk - Build # 1909 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1909/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6182 lines...]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.239 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:22 h
[INFO] Finished at: 2014-10-22T13:57:19+00:00
[INFO] Final Memory: 64M/851M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7215
Updating HDFS-6581
Updating HDFS-7204
Updating YARN-2720
Updating YARN-90
Updating YARN-2721
Updating YARN-2715
Updating HDFS-7259
Updating HDFS-7221
Updating YARN-2709
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)
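
The assertion at TestDNFencing.java:448 fails with the identical "expected:<18> but was:<12>" message in all three failing builds reported here (#1907, #1908, and #1909), which usually points at a timing-sensitive count rather than a new regression in each change set. One common way to harden such a check is to wait for the expected condition with a deadline instead of asserting immediately. A minimal, self-contained sketch of that idea follows; Hadoop's test utilities ship a similar waitFor helper, but this is not the project's actual code:

    import java.util.concurrent.TimeoutException;
    import java.util.function.Supplier;

    // Hypothetical polling helper: re-check a condition until it holds or a
    // deadline passes, so a slow heartbeat or replication report does not
    // immediately fail an assertion on a cluster-derived count.
    public final class WaitForCondition {

        private WaitForCondition() {
        }

        public static void waitFor(Supplier<Boolean> check,
                                   long intervalMillis,
                                   long timeoutMillis)
                throws InterruptedException, TimeoutException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                if (Boolean.TRUE.equals(check.get())) {
                    return;                       // condition reached in time
                }
                Thread.sleep(intervalMillis);     // back off before re-checking
            }
            throw new TimeoutException("Condition not met within " + timeoutMillis + " ms");
        }
    }

A test would call something like WaitForCondition.waitFor(() -> getQueuedCount() == 18, 100, 10000) before the final assertEquals, where getQueuedCount() is a hypothetical stand-in for however the real test computes its count.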



Build failed in Jenkins: Hadoop-Hdfs-trunk #1908

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1908/changes>

Changes:

[kasha] HADOOP-11194. Ignore .keep files (kasha)

[benoy] HDFS-7184. Allow data migration tool to run as a daemon. (Benoy Antony)

[zjshen] YARN-2673. Made timeline client put APIs retry if ConnectException happens. Contributed by Li Lu.

[zjshen] YARN-2582. Fixed Log CLI and Web UI for showing aggregated logs of LRS. Contributed Xuan Gong.

[cmccabe] HDFS-7266. HDFS Peercache enabled check should not lock on object (awang via cmccabe)

[cmccabe] HDFS-7154. Fix returning value of starting reconfiguration task (Lei Xu via Colin P. McCabe)

[jianhe] YARN-2701. Potential race condition in startLocalizer when using LinuxContainerExecutor. Contributed by Xuan Gong

[jianhe] Missing file for YARN-2701

[jianhe] Missing file for YARN-1879

[zjshen] YARN-2717. Avoided duplicate logging when container logs are not found. Contributed by Xuan Gong.

------------------------------------------
[...truncated 6049 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.058 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.117 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.507 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.221 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.624 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.647 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.554 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.884 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.278 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.097 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.097 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.05 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.226 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.93 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.611 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.59 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 161.568 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.696 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.52 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.109 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.194 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.647 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.676 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.631 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.519 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.639 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.98 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.574 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.433 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.23 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.478 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.541 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.757 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.869 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.27 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.136 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.379 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.133 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.319 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.539 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.157 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.307 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.688 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.672 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.818 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.266 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.29 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.5 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.626 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.277 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.208 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.263 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.95 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.245 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.232 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.781 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.223 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.466 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.821 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.815 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.539 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.365 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.146 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.462 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.678 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.894 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.205 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.122 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.142 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.059 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.989 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.989 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred
  TestStandbyIsHot.testStandbyIsHot:74 » IO Problem in starting http server. Ser...
  TestStandbyIsHot.testDatanodeRestarts:148 » IO Problem in starting http server...

Tests run: 3139, Failures: 1, Errors: 3, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.227 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-21T13:56:49+00:00
[INFO] Final Memory: 63M/893M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-1879
Updating HDFS-7184
Updating YARN-2701
Updating HDFS-7266
Updating HADOOP-11194
Updating YARN-2582
Updating YARN-2673
Updating YARN-2717
Updating HDFS-7154

Build failed in Jenkins: Hadoop-Hdfs-trunk #1907

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1907/changes>

Changes:

[ivanmi] MAPREDUCE-5911. Terasort TeraOutputFormat does not check for output directory existence. Contributed by Bruno P. Kinoshita.

[ivanmi] Revert "MAPREDUCE-5911. Terasort TeraOutputFormat does not check for output directory existence. Contributed by Bruno P. Kinoshita."

------------------------------------------
[...truncated 6017 lines...]
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.677 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournal
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.445 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.291 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.649 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.259 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.074 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.611 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.047 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.22 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.619 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.547 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.563 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.149 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.311 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.048 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.021 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.194 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.465 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.797 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.509 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.774 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 161.22 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.69 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.076 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.081 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.745 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.044 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.712 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.581 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.541 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.518 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.013 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.513 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.477 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.206 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.577 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.666 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.693 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.054 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.319 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.07 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.328 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.139 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.192 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.587 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.217 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.259 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.413 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.385 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.64 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.078 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.171 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.485 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.553 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.256 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.099 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.117 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.857 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.199 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.903 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.836 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.282 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.113 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.82 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.795 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.42 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.208 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.168 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.454 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.124 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.882 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.219 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.13 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.041 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.959 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.98 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestBalancer.testUnknownDatanode:737 expected:<0> but was:<-3>
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3139, Failures: 2, Errors: 1, Skipped: 19
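
The three cases above are usually quicker to chase down in isolation than by repeating the full 3139-test run. A minimal sketch, assuming a local trunk checkout; the -pl module selection and -Dtest filter are standard Maven/Surefire options rather than anything taken from this log:

    # run only the classes reported as failing, from the source tree root
    mvn -pl hadoop-hdfs-project/hadoop-hdfs test \
        -Dtest=TestBalancer,TestDNFencing,TestDNFencingWithReplication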

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.241 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2014-10-20T13:58:15+00:00
[INFO] Final Memory: 64M/850M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-5911

Build failed in Jenkins: Hadoop-Hdfs-trunk #1906

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1906/changes>

Changes:

[vinodkv] YARN-2504. Enhanced RM Admin CLI to support management of node-labels. Contributed by Wangda Tan.

------------------------------------------
[...truncated 5994 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.792 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.667 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournal
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.453 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.301 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.621 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.069 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.728 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.868 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.61 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.501 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.625 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.137 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.325 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.014 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.045 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.592 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.561 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.925 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.523 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.6 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 164.314 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.678 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.527 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.666 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.284 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.45 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.701 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.575 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.558 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.5 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.534 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.427 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.207 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.513 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.279 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.703 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.195 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.291 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.143 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.325 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.14 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.235 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.615 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.17 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.262 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.565 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.278 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.607 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.114 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.193 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.469 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.568 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.147 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.163 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.108 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.833 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.182 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.847 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.788 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.297 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.457 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.932 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.804 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.351 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.205 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.128 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.443 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.786 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.922 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.963 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.125 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.069 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.88 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.074 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3139, Failures: 1, Errors: 1, Skipped: 19
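
Per-test detail for the two cases above ends up under the surefire-reports directory referenced further down in this log. One way to pull out just the recorded assertion failures from a local run with the same layout, with the grep pattern chosen purely for illustration:

    # list the fencing-related report files and show the failed assertions
    ls hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/ | grep -i DNFencing
    grep -n "expected:" hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/*TestDNFencing*.txt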

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.193 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-19T13:56:21+00:00
[INFO] Final Memory: 60M/866M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2504

Build failed in Jenkins: Hadoop-Hdfs-trunk #1905

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1905/changes>

Changes:

[vinayakumarb] HDFS-6995. Block should be placed in the client's 'rack-local' node if 'client-local' node is not available (vinayakumarb)

[vinayakumarb] HDFS-7242. Code improvement for FSN#checkUnreadableBySuperuser. (Contributed by Yi Liu)

[vinayakumarb] HDFS-7252. small refinement to the use of isInAnEZ in FSNamesystem. (Yi Liu via vinayakumarb)

[vinodkv] YARN-2699. Fixed a bug in CommonNodeLabelsManager that caused tests to fail when using ephemeral ports on NodeIDs. Contributed by Wangda Tan.

[jianhe] YARN-2588. Standby RM fails to transitionToActive if previous transitionToActive failed with ZK exception. Contributed by Rohith Sharmaks

[jlowe] MAPREDUCE-5542. Killing a job just as it finishes can generate an NPE in client. Contributed by Rohith

[vinodkv] HADOOP-11207. Enhanced common DelegationTokenAuthenticationHandler to support proxy-users on Delegation-token management operations. Contributed by Zhijie Shen.

[jianhe] YARN-1879. Marked Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over. Contributed by Tsuyoshi OZAWA

[vinodkv] YARN-2705. Fixed bugs in ResourceManager node-label manager that were causing test-failures: added a dummy in-memory labels-manager. Contributed by Wangda Tan.

[szetszwo] HDFS-7260. Change DFSOutputStream.MAX_PACKETS to be configurable.

[vinodkv] YARN-2676. Enhanced Timeline auth-filter to support proxy users. Contributed by Zhijie Shen.

------------------------------------------
[...truncated 6004 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.256 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.07 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.021 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.396 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.617 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.504 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.581 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.819 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.375 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.074 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.127 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.283 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.494 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.824 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.559 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.661 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.309 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.666 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.112 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.109 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.857 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.583 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.757 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.579 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.536 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.483 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.567 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.454 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.213 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.531 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.385 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.737 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.159 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.338 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.123 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.428 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.117 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.155 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.6 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.129 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.155 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.52 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.27 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.587 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.056 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.19 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.461 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.547 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.3 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.045 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.135 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.822 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.169 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.947 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.845 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.333 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.038 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.795 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.773 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.024 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.712 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.149 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.495 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.127 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.901 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.31 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.143 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.062 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.001 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.099 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3139, Failures: 1, Errors: 1, Skipped: 19
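
TestDNFencing.testQueueingWithAppend and TestDNFencingWithReplication.testFencingStress appear in the failure lists of builds #1905, #1906 and #1907 alike, which points at either a recent regression or a timing-sensitive test rather than a one-off. A quick local check for flakiness is to loop one of the classes a few times; a minimal sketch, with the iteration count chosen arbitrarily:

    # repeat the suspect class ten times and stop at the first failure
    for i in $(seq 1 10); do
      mvn -pl hadoop-hdfs-project/hadoop-hdfs test -Dtest=TestDNFencing || break
    done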

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.232 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:22 h
[INFO] Finished at: 2014-10-18T13:57:13+00:00
[INFO] Final Memory: 64M/830M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7260
Updating YARN-1879
Updating YARN-2588
Updating HDFS-7242
Updating HDFS-7252
Updating HDFS-6995
Updating MAPREDUCE-5542
Updating YARN-2705
Updating YARN-2699
Updating HADOOP-11207
Updating YARN-2676
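
For reference, the Surefire failure reported above can usually be reproduced outside Jenkins by building only the hadoop-hdfs module from a local checkout. A minimal sketch, assuming a working tree of the Hadoop source and the module path shown in the log (hadoop-hdfs-project/hadoop-hdfs):

    cd hadoop-hdfs-project/hadoop-hdfs
    # Run the module's tests; -e prints full stack traces and -X enables debug
    # logging, as the Maven error message above suggests.
    mvn -e -X test
    # Per-test results land under target/surefire-reports, the same directory
    # the [ERROR] block points to in the Jenkins workspace.
    ls target/surefire-reports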

Build failed in Jenkins: Hadoop-Hdfs-trunk #1904

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1904/changes>

Changes:

[stevel] YARN-2689 TestSecureRMRegistryOperations failing on windows: secure ZK won't start

[jianhe] YARN-2621. Simplify the output when the user doesn't have the access for getDomain(s). Contributed by Zhijie Shen

[raviprak] HADOOP-11182. GraphiteSink emits wrong timestamps (Sascha Coenen via raviprak) Fix CHANGES.txt

[jianhe] YARN-2682. Updated WindowsSecureContainerExecutor to not use DefaultContainerExecutor#getFirstApplicationDir and use getWorkingDir() instead. Contributed by Zhihai Xu

[jianhe] Moved YARN-2682 from 2.6 to 2.7 in CHANGES.txt

------------------------------------------
[...truncated 5998 lines...]
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.46 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.296 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.766 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.254 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.055 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.6 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.679 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.631 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.505 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.592 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.339 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.363 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.224 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.047 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.919 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.631 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.8 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.531 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.757 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 164.566 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.699 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.137 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.124 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.52 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.574 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.743 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.586 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.617 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.546 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.049 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.562 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.473 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.217 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.547 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.742 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.696 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.116 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.311 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.077 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.5 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.12 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.282 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.532 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.161 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.232 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.464 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.386 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.589 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.119 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.062 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.446 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.561 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.2 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.071 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.185 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.817 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.182 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.967 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.834 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.264 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.448 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.807 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.812 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.464 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.22 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.063 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.505 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.712 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.889 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.817 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.172 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.029 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.042 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.976 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3137, Failures: 1, Errors: 1, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.161 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2014-10-17T13:59:00+00:00
[INFO] Final Memory: 56M/866M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2682
Updating HADOOP-11182
Updating YARN-2621
Updating YARN-2689
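
Both failures in this build's Results block come from the DataNode fencing tests. Rather than repeating the full two-hour-plus module run, Surefire's test selection can narrow the rerun to just those classes; a hedged sketch, assuming a local checkout and the maven-surefire-plugin 2.16 shown in the log:

    cd hadoop-hdfs-project/hadoop-hdfs
    # Re-run only the two failing test classes (Surefire accepts a comma-separated list).
    mvn test -Dtest=TestDNFencing,TestDNFencingWithReplication
    # Or narrow to the single failed method reported at line 448 of TestDNFencing.
    mvn test -Dtest=TestDNFencing#testQueueingWithAppend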

Build failed in Jenkins: Hadoop-Hdfs-trunk #1903

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1903/changes>

Changes:

[jlowe] MAPREDUCE-5873. Shuffle bandwidth computation includes time spent waiting for maps. Contributed by Siqi Li

[jing9] HDFS-7185. The active NameNode will not accept an fsimage sent from the standby during rolling upgrade. Contributed by Jing Zhao.

[jlowe] MAPREDUCE-5970. Provide a boolean switch to enable MR-AM profiling. Contributed by Gera Shegalov

[jianhe] YARN-2312. Deprecated old ContainerId#getId API and updated MapReduce to use ContainerId#getContainerId instead. Contributed by Tsuyoshi OZAWA

[raviprak] HADOOP-11181. GraphiteSink emits wrong timestamps (Sascha Coenen via raviprak)

[vinodkv] YARN-2496. Enhanced Capacity Scheduler to have basic support for allocating resources based on node-labels. Contributed by Wangda Tan.

[vinodkv] YARN-2685. Fixed a bug in CommonNodeLabelsManager that caused wrong resource tracking per label when a host runs multiple node-managers. Contributed by Wangda Tan.

[szetszwo] HDFS-7208. NN doesn't schedule replication when a DN storage fails.  Contributed by Ming Ma

[szetszwo] HDFS-5089. When a LayoutVersion support SNAPSHOT, it must support FSIMAGE_NAME_OPTIMIZATION.

------------------------------------------
[...truncated 6000 lines...]
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.632 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.081 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.952 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.116 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.652 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.524 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.615 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.036 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.41 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.957 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.028 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.133 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.645 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.812 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.52 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.648 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.379 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.701 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.121 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.017 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.479 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.539 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.876 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.674 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.528 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.346 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.027 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.53 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.461 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.202 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.467 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.338 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.883 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.155 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.279 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.188 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.44 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.381 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.259 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.609 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.306 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.209 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.447 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.398 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.555 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.208 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.169 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.445 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.585 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.297 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.173 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.214 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.919 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.516 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.989 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.82 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.684 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.592 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.817 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.875 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.546 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.196 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.175 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.434 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.144 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.938 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.268 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.13 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.077 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.905 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.066 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3137, Failures: 1, Errors: 1, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.220 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:23 h
[INFO] Finished at: 2014-10-16T13:58:05+00:00
[INFO] Final Memory: 62M/864M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7208
Updating YARN-2312
Updating HDFS-7185
Updating HDFS-5089
Updating YARN-2496
Updating MAPREDUCE-5970
Updating MAPREDUCE-5873
Updating HADOOP-11181
Updating YARN-2685
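
The recurring [WARNING] about org.eclipse.m2e:lifecycle-mapping:1.0.0 in each of these builds is unrelated to the test failures: that artifact is an Eclipse m2e placeholder that is not published to Maven Central, so the warning is generally harmless for command-line builds. The message also notes that the failed resolution is cached in the local repository; when a genuinely missing dependency does need to be re-resolved, Maven's -U flag forces an update check. A minimal sketch:

    # Force Maven to re-check remote repositories instead of trusting the cached
    # "resolution will not be reattempted" marker in the local repository.
    mvn -U clean test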

Build failed in Jenkins: Hadoop-Hdfs-trunk #1902

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1902/changes>

Changes:

[jing9] HDFS-7228. Add an SSD policy into the default BlockStoragePolicySuite. Contributed by Jing Zhao.

[zjshen] HADOOP-11181. Generalized o.a.h.s.t.d.DelegationTokenManager to handle all sub-classes of AbstractDelegationTokenIdentifier. Contributed by Zhijie Shen.

[wheat9] HDFS-7201. Fix typos in hdfs-default.xml. Contributed by Dawson Choong.

[zjshen] YARN-2656. Made RM web services authentication filter support proxy user. Contributed by Varun Vasudev and Zhijie Shen.

[wheat9] HDFS-7190. Bad use of Preconditions in startFileInternal(). Contributed by Dawson Choong.

------------------------------------------
[...truncated 6098 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.909 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.644 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.445 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.637 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.264 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.354 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.01 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.052 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.501 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.535 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.853 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.52 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.422 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 153.214 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.688 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.12 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.101 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.7 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.689 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.71 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.585 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.553 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.526 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.09 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.573 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.474 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.206 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.518 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.172 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.76 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.122 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.335 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.188 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.497 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.144 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.206 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.584 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.259 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.384 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.524 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.436 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.615 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.231 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.222 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.468 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.633 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.237 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.111 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.191 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.868 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.211 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.853 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.847 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.294 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.085 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.817 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.793 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.977 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.113 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.162 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.159 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.485 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.744 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.942 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.669 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.128 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.009 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.935 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.063 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred
  TestDFSHAAdminMiniCluster.setup:72 » NoClassDefFound org/apache/hadoop/conf/Re...
  TestDFSHAAdminMiniCluster.shutdown:85 NullPointer
  TestDFSHAAdminMiniCluster.setup:72 » NoClassDefFound org/apache/hadoop/hdfs/se...
  TestDFSHAAdminMiniCluster.shutdown:85 NullPointer
  TestDFSHAAdminMiniCluster.setup:72 » NoClassDefFound org/apache/hadoop/hdfs/se...
  TestDFSHAAdminMiniCluster.shutdown:85 NullPointer
  TestDFSHAAdminMiniCluster.setup:72 » NoClassDefFound org/apache/hadoop/hdfs/se...
  TestDFSHAAdminMiniCluster.shutdown:85 NullPointer
  TestDFSHAAdminMiniCluster.setup:72 » NoClassDefFound org/apache/hadoop/hdfs/se...
  TestDFSHAAdminMiniCluster.shutdown:85 NullPointer
  TestDFSHAAdminMiniCluster.setup:72 » NoClassDefFound org/apache/hadoop/hdfs/se...
  TestDFSHAAdminMiniCluster.shutdown:85 NullPointer

Tests run: 3141, Failures: 1, Errors: 13, Skipped: 19
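
The repeated setup/shutdown pairs above are one underlying failure reported as two entries per test method: when a JUnit 4 @Before method dies with a NoClassDefFoundError (often a sign of a stale or inconsistent classpath on the build slave), the test's fields are never initialized, yet JUnit still runs @After, which then fails with a NullPointerException. A minimal sketch of that pattern, with hypothetical names rather than the real TestDFSHAAdminMiniCluster code:

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    // Sketch of why a NoClassDefFoundError in setup() is paired with a
    // NullPointerException in shutdown() for the same test method.
    public class SetupFailureSketch {

        private Object cluster;   // hypothetical stand-in for the mini-cluster field

        @Before
        public void setup() {
            // If setup dies before assigning 'cluster' (simulated here with a
            // NoClassDefFoundError for a hypothetical class name) ...
            throw new NoClassDefFoundError("org/example/MissingClass");
        }

        @After
        public void shutdown() {
            // ... JUnit 4 still runs @After, and the uninitialized field NPEs,
            // producing the second error entry for the same test.
            cluster.toString();
        }

        @Test
        public void test() {
            // never reached; both errors above are attributed to this test
        }
    }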

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.222 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-15T13:56:20+00:00
[INFO] Final Memory: 57M/836M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2656
Updating HDFS-7228
Updating HDFS-7201
Updating HADOOP-11181
Updating HDFS-7190

Build failed in Jenkins: Hadoop-Hdfs-trunk #1901

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1901/changes>

Changes:

[jlowe] MAPREDUCE-6125. TestContainerLauncherImpl sometimes fails. Contributed by Mit Desai

[jlowe] YARN-2667. Fix the release audit warning caused by hadoop-yarn-registry. Contributed by Yi Liu

[jlowe] MAPREDUCE-6115. TestPipeApplication#testSubmitter fails in trunk. Contributed by Binglin Chang

[jing9] HDFS-7236. Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots. Contributed by Yongjun Zhang.

[wheat9] HDFS-6544. Broken Link for GFS in package.html. Contributed by Suraj Nayak M.

[zjshen] YARN-2651. Spun off LogRollingInterval from LogAggregationContext. Contributed by Xuan Gong.

[cnauroth] HDFS-7090. Use unbuffered writes when persisting in-memory replicas. Contributed by Xiaoyu Yao.

[jlowe] YARN-2377. Localization exception stack traces are not passed as diagnostic info. Contributed by Gera Shegalov

[jianhe] YARN-2308. Changed CapacityScheduler to explicitly throw exception if the queue

[jianhe] Missing Changes.txt for YARN-2308

[kasha] YARN-2641. Decommission nodes on -refreshNodes instead of next NM-RM heartbeat. (Zhihai Xu via kasha)

[kasha] YARN-2566. DefaultContainerExecutor should pick a working directory randomly. (Zhihai Xu via kasha)

[atm] HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and loginUgi are a proxied user. Contributed by Arun Suresh.

[szetszwo] HDFS-7237. The command "hdfs namenode -rollingUpgrade" throws ArrayIndexOutOfBoundsException.

[wheat9] HADOOP-11198. Fix typo in javadoc for FileSystem#listStatus(). Contributed by Li Lu.

------------------------------------------
[...truncated 6005 lines...]
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.856 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.103 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.617 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.521 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.59 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.184 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.413 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.195 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.046 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.491 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.486 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.872 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.529 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.545 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.356 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.69 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.092 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.094 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.708 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.465 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.72 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.689 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.684 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.469 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.995 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.426 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.536 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.27 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.531 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.095 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.911 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.055 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.21 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.039 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.393 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.17 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.207 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.54 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.255 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.259 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.424 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.285 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.298 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.123 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.303 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.453 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.632 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.177 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.218 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.416 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.955 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.601 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.641 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.014 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.291 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.982 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.863 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.036 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.274 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.853 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.263 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.436 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.47 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.685 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.513 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.157 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.093 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.829 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.112 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3135, Failures: 1, Errors: 1, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.266 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-14T13:56:47+00:00
[INFO] Final Memory: 66M/851M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2667
Updating YARN-2641
Updating YARN-2651
Updating HDFS-7090
Updating MAPREDUCE-6115
Updating YARN-2308
Updating MAPREDUCE-6125
Updating HDFS-6544
Updating HDFS-7236
Updating YARN-2377
Updating HDFS-7237
Updating YARN-2566
Updating HADOOP-11198
Updating HADOOP-11176

Build failed in Jenkins: Hadoop-Hdfs-trunk #1900

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1900/>

------------------------------------------
[...truncated 6004 lines...]
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.735 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.682 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournal
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.453 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.285 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.495 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.257 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.07 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.06 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.602 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.644 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.499 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.578 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.986 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.377 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.022 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.009 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.435 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.501 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.996 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.516 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.529 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 160.382 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.686 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.055 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.639 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.242 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.483 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.734 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.591 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.527 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.626 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.969 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.485 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.452 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.208 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.445 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.974 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.674 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.128 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.276 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.106 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.351 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.207 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.285 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.578 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.152 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.177 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.57 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.274 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.629 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.207 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.133 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.431 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.576 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.284 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.066 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.209 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.888 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.184 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.792 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.801 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.286 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.482 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.839 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.829 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.418 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.118 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.08 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.532 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.628 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.896 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.831 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.134 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.142 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.087 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.035 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.007 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3134, Failures: 1, Errors: 1, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.167 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-13T13:56:36+00:00
[INFO] Final Memory: 64M/908M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results

Hadoop-Hdfs-trunk - Build # 1900 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1900/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6197 lines...]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.167 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-13T13:56:36+00:00
[INFO] Final Memory: 64M/908M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend

Error Message:
expected:<18> but was:<12>

Stack Trace:
java.lang.AssertionError: expected:<18> but was:<12>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:448)
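
The "expected:<...> but was:<...>" text is JUnit 4's standard assertEquals failure message, with the expected value first and the observed value second. The check at TestDNFencing.java:448 is an equality assertion of roughly the following shape (the variable names below are hypothetical, not the actual test code):

    import static org.junit.Assert.assertEquals;

    public class AssertShapeSketch {
        public static void main(String[] args) {
            int expectedCount = 18;   // hypothetical names; the real test compares
            int observedCount = 12;   // an expected count against what it observed
            // Fails with: java.lang.AssertionError: expected:<18> but was:<12>
            assertEquals(expectedCount, observedCount);
        }
    }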


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress

Error Message:
Deferred

Stack Trace:
java.lang.RuntimeException: Deferred
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-19
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
	at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
	at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)



Build failed in Jenkins: Hadoop-Hdfs-trunk #1899

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1899/changes>

Changes:

[stevel] YARN-2668 yarn-registry JAR won't link against ZK 3.4.5. (stevel)

[kasha] MAPREDUCE-5875. Make Counter limits consistent across JobClient, MRAppMaster, and YarnChild. (Gera Shegalov via kasha)

------------------------------------------
[...truncated 6011 lines...]
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.82 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.681 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournal
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.456 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.288 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.635 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.257 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.066 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.548 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.689 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.216 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.616 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.544 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.609 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.716 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.332 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.233 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.067 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.182 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.616 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.876 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.503 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.768 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.318 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.69 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.102 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.597 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.293 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.572 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.682 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.56 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.584 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.461 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.979 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.44 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.456 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.208 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.468 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.208 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.682 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.104 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.295 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.135 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.361 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.212 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.226 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.584 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.161 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.223 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.398 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.281 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.623 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.188 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.154 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.421 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.569 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.203 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.095 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.148 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.805 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.188 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.845 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.827 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.275 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.201 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.73 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.883 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.941 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.214 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.502 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.445 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.229 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.903 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.243 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.135 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.055 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.031 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.976 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots:162->doTestMultipleSnapshots:184 » IO
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3133, Failures: 1, Errors: 2, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:37 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.215 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:37 h
[INFO] Finished at: 2014-10-12T14:12:01+00:00
[INFO] Final Memory: 66M/865M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2668
Updating MAPREDUCE-5875

Build failed in Jenkins: Hadoop-Hdfs-trunk #1898

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1898/changes>

Changes:

[vinodkv] YARN-2494. Added NodeLabels Manager internal API and implementation. Contributed by Wangda Tan.

[cmccabe] HDFS-7198. Fix "unchecked conversion" warning in DFSClient#getPathTraceScope (cmccabe)

[wang] HDFS-7209. Populate EDEK cache when creating encryption zone. (Yi Liu via wang)

[cnauroth] HADOOP-11188. hadoop-azure: automatically expand page blobs when they become full. Contributed by Eric Hanson.

[wheat9] Update CHANGES.txt after merging HDFS-6252 and HDFS-6657 to branch-2.

[vinodkv] YARN-2501. Enhanced AMRMClient library to support requests against node labels. Contributed by Wangda Tan.

[wheat9] HADOOP-11193. Fix uninitialized variables in NativeIO.c. Contributed by Xiaoyu Yao.

------------------------------------------
[...truncated 6009 lines...]
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.514 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.256 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.076 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.481 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 148.44 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.625 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.537 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.583 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.417 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.385 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.293 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.177 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.946 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 377.822 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.86 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.53 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.284 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 153.389 sec - in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.692 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.103 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.727 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.931 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.44 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.722 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.559 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.702 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.709 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.977 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.493 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.463 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.209 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.427 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.953 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.684 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.054 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.341 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.137 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.441 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.141 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.129 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.643 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.326 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.238 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.519 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.334 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.763 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.116 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.172 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.436 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.543 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.233 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.077 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.124 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.856 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.143 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.198 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.8 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.224 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.195 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.793 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.81 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.104 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.722 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.081 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.459 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.83 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 4.902 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.188 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.123 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.004 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 13.935 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.05 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:448 expected:<18> but was:<12>

Tests in error: 
  TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots:162->doTestMultipleSnapshots:184 » IO
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3134, Failures: 1, Errors: 2, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:22 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.191 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:22 h
[INFO] Finished at: 2014-10-11T13:57:38+00:00
[INFO] Final Memory: 66M/877M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-6252
Updating HADOOP-11193
Updating YARN-2501
Updating HDFS-7198
Updating YARN-2494
Updating HADOOP-11188
Updating HDFS-7209
Updating HDFS-6657

Build failed in Jenkins: Hadoop-Hdfs-trunk #1897

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1897/changes>

Changes:

[umamahesh] HADOOP-11133. Should trim the content of keystore password file for JavaKeyStoreProvider (Yi Liu via umamahesh)

[kihwal] HDFS-7217. Better batching of IBRs. Contributed by Kihwal Lee.

[cnauroth] HADOOP-11175. Fix several issues of hadoop security configuration in user doc. Contributed by Yi Liu.

[vinodkv] YARN-2493. Added user-APIs for using node-labels. Contributed by Wangda Tan.

[cnauroth] HDFS-7195. Update user doc of secure mode about Datanodes don't require root or jsvc. Contributed by Chris Nauroth.

[wang] HADOOP-11178. Fix findbugs exclude file. (Arun Suresh via wang)

[zjshen] YARN-2629. Made the distributed shell use the domain-based timeline ACLs. Contributed by Zhijie Shen.

[wang] HADOOP-11174. Delegation token for KMS should only be got once if it already exists. (Yi Liu via wang)

[vinodkv] YARN-2544. Added admin-API objects for using node-labels. Contributed by Wangda Tan.

[cmccabe] HADOOP-11184. Update Hadoop's lz4 to r123 (cmccabe)

[kasha] YARN-2180. [YARN-1492] In-memory backing store for cache manager. (Chris Trezzo via kasha)

[zjshen] YARN-2617. Fixed ApplicationSubmissionContext to still set resource for backward compatibility. Contributed by Wangda Tan.

[atm] HDFS-7026. Introduce a string constant for "Failed to obtain user group info...". Contributed by Yongjun Zhang.

[zjshen] YARN-2671. Fix the Jira number in the change log.

[cnauroth] MAPREDUCE-6122. TestLineRecordReader may fail due to test data files checked out of git with incorrect line endings. Contributed by Chris Nauroth.

[cnauroth] MAPREDUCE-6122. Fix CHANGES.txt.

[cnauroth] MAPREDUCE-6123. TestCombineFileInputFormat incorrectly starts 2 MiniDFSCluster instances. Contributed by Chris Nauroth.

[cnauroth] YARN-2662. TestCgroupsLCEResourcesHandler leaks file descriptors. Contributed by Chris Nauroth.

[zjshen] YARN-2583. Modified AggregatedLogDeletionService to be able to delete rolling aggregated logs. Contributed by Xuan Gong.

------------------------------------------
[...truncated 9597 lines...]
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"1525518005@qtp-770900021-1 - Acceptor0 SelectChannelConnector@localhost:55443" daemon prio=5 tid=58 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:171)
"Thread-106" daemon prio=5 tid=133 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 2 on 48798" daemon prio=5 tid=38 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 1 on 49134" daemon prio=5 tid=93 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 38252" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 7 on 48798" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 6 on 48798" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"Thread-104" daemon prio=5 tid=135 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.waitForInit(DataBlockScanner.java:115)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNextBPScanner(DataBlockScanner.java:135)
        at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:88)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 1 on 38252" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 0 on 34034" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1294aa42" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:5139)
        at java.lang.Thread.run(Thread.java:662)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@9bed3d1" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 38252" daemon prio=5 tid=69 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"IPC Server handler 5 on 49134" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-890430250-67.195.81.153-1412949040889"> daemon prio=5 tid=163 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"refreshUsed-<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/current/BP-890430250-67.195.81.153-1412949040889"> daemon prio=5 tid=164 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)
"IPC Server handler 3 on 34034" daemon prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)
"refreshUsed-<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-890430250-67.195.81.153-1412949040889"> daemon prio=5 tid=166 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:662)



Tests in error: 
  TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots:162->doTestMultipleSnapshots:184 » IO
  TestDeadDatanode.testDeadDatanode:98 » NoClassDefFound org/apache/hadoop/http/...
  TestDeadDatanode.cleanup:60 NullPointer
  TestDNFencingWithReplication.testFencingStress:135 » Runtime Deferred

Tests run: 3134, Failures: 3, Errors: 4, Skipped: 19

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:21 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  2.190 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2014-10-10T13:56:10+00:00
[INFO] Final Memory: 63M/851M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-1492
Updating MAPREDUCE-6123
Updating HADOOP-11133
Updating MAPREDUCE-6122
Updating YARN-2662
Updating YARN-2493
Updating YARN-2180
Updating YARN-2544
Updating HDFS-7195
Updating HADOOP-11174
Updating HADOOP-11175
Updating HADOOP-11184
Updating YARN-2629
Updating YARN-2671
Updating YARN-2617
Updating HDFS-7217
Updating HDFS-7026
Updating YARN-2583
Updating HADOOP-11178