Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/04/28 23:39:03 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk #3077

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3077/changes>

Changes:

[aw] HADOOP-13067. cleanup the dockerfile

------------------------------------------
[...truncated 5183 lines...]
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.219 sec - in org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.585 sec - in org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.727 sec - in org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.209 sec - in org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.122 sec - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.345 sec - in org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.164 sec - in org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.595 sec - in org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.191 sec - in org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.271 sec - in org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.386 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.241 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.887 sec - in org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.691 sec - in org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.403 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.861 sec - in org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.993 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.799 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.739 sec - in org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.683 sec - in org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 141.107 sec - in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.796 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.637 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.325 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.389 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.961 sec - in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.407 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.135 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.57 sec - in org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.693 sec - in org.apache.hadoop.hdfs.TestDisableConnCache
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.838 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.146 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.698 sec - in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.055 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.818 sec - in org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.905 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.781 sec - in org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.775 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.072 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.378 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.66 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.12 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.575 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.454 sec - in org.apache.hadoop.hdfs.TestParallelRead
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.292 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Running org.apache.hadoop.hdfs.TestAclsEndToEnd
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.105 sec - in org.apache.hadoop.hdfs.TestAclsEndToEnd
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.226 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.097 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.597 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.709 sec - in org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.513 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.618 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.649 sec - in org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.644 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.039 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.87 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.636 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.89 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.133 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.227 sec - in org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.909 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.127 sec - in org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.928 sec - in org.apache.hadoop.hdfs.TestBlockMissingException
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.531 sec - in org.apache.hadoop.hdfs.TestReplication
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.827 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.154 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.259 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.683 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.371 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.408 sec - in org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.709 sec - in org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.258 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.445 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.665 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.667 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.734 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.982 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead

Results :

Tests in error: 
  TestLossyRetryInvocationHandler.testStartNNWithTrashEmptier:45 » NoClassDefFound
  TestDNFencingWithReplication.testFencingStress:143 » NoClassDefFound org/apach...

Tests run: 4389, Failures: 0, Errors: 2, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:05 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:47 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.129 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:51 h
[INFO] Finished at: 2016-04-28T21:38:54+00:00
[INFO] Final Memory: 68M/514M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
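
The two NoClassDefFoundError entries and the fork timeout above came out of a 3:47 h test run, so it is usually cheaper to chase them in isolation before resuming the full reactor. A minimal sketch of doing that locally, assuming a trunk checkout and that the Surefire -Dtest filter behaves as in stock maven-surefire-plugin 2.17 (the class names are the ones from the "Tests in error" summary; everything else here is illustrative, not taken from the log):

    cd hadoop-hdfs-project/hadoop-hdfs
    # Re-run only the two classes that reported errors, with full stack traces (-e)
    mvn -e test -Dtest=TestLossyRetryInvocationHandler,TestDNFencingWithReplication
    # Then resume the reactor from the failed module, as the log itself suggests
    mvn <goals> -rf :hadoop-hdfs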

Hadoop-Hdfs-trunk - Build # 3078 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3078/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5431 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:01 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [54:12 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.099 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 58:15 min
[INFO] Finished at: 2016-04-28T22:40:12+00:00
[INFO] Final Memory: 59M/892M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:657)


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf901.gq1.ygridcore.net/67.195.81.145"; destination host is: "localhost":55608; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf901.gq1.ygridcore.net/67.195.81.145"; destination host is: "localhost":55608; 
	at java.net.SocketInputStream.read(SocketInputStream.java:196)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.security.krb5.internal.TCPClient.readFully(NetClient.java:132)
	at sun.security.krb5.internal.TCPClient.receive(NetClient.java:84)
	at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:390)
	at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:343)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.krb5.KdcComm.send(KdcComm.java:327)
	at sun.security.krb5.KdcComm.send(KdcComm.java:219)
	at sun.security.krb5.KdcComm.send(KdcComm.java:191)
	at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:415)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:797)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:792)
	at org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:415)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1505)
	at org.apache.hadoop.ipc.Client.call(Client.java:1375)
	at org.apache.hadoop.ipc.Client.call(Client.java:1353)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:569)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy27.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:206)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:95)



Hadoop-Hdfs-trunk - Build # 3079 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3079/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6640 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:00 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [55:03 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.106 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 59:05 min
[INFO] Finished at: 2016-04-29T05:46:52+00:00
[INFO] Final Memory: 68M/871M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestEncryptedTransfer.testLongLivedWriteClientAfterRestart[0]

Error Message:
Port in use: localhost:50828

Stack Trace:
java.net.BindException: Port in use: localhost:50828
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:915)
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:857)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:158)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2020)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985)
	at org.apache.hadoop.hdfs.TestEncryptedTransfer.testLongLivedWriteClientAfterRestart(TestEncryptedTransfer.java:417)


FAILED:  org.apache.hadoop.hdfs.TestFileCreationDelete.testFileCreationDeleteParent

Error Message:
Problem binding to [localhost:50673] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:50673] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:530)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:426)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestFileCreationDelete.testFileCreationDeleteParent(TestFileCreationDelete.java:75)


FAILED:  org.apache.hadoop.hdfs.web.TestWebHDFS.testNamenodeRestart

Error Message:
test timed out after 300000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 300000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:768)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:741)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:555)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:586)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:582)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:962)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:977)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:1087)
	at org.apache.hadoop.hdfs.web.TestWebHDFS.testNamenodeRestart(TestWebHDFS.java:264)


FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:657)
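
The "Port in use: localhost:50828" and "Address already in use" failures above both point at a port collision on the build slave rather than at the tests themselves. A hypothetical check (not part of the Jenkins job) that could be run on the slave while the failure is fresh, assuming standard Linux tooling; the port number is the one from the BindException stack trace:

    # Show which process, if any, still holds the port MiniDFSCluster tried to bind
    lsof -i :50828
    netstat -tlnp | grep 50828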




Hadoop-Hdfs-trunk - Build # 3082 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3082/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5496 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:48 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:27 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.150 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:32 h
[INFO] Finished at: 2016-04-29T17:21:11+00:00
[INFO] Final Memory: 60M/876M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
8 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.shutdown(MiniQJMHACluster.java:168)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpoint(TestRollingUpgrade.java:604)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN(TestRollingUpgrade.java:568)


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testDFSAdminRollingUpgradeCommands

Error Message:
expected null, but was:<javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo$Bean,items=((itemName=blockPoolId,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=createdRollbackImages,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=finalizeTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=startTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={blockPoolId=BP-1681401862-67.195.81.153-1461949367675, createdRollbackImages=true, finalizeTime=0, startTime=1461949370752})>

Stack Trace:
java.lang.AssertionError: expected null, but was:<javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo$Bean,items=((itemName=blockPoolId,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=createdRollbackImages,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=finalizeTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=startTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={blockPoolId=BP-1681401862-67.195.81.153-1461949367675, createdRollbackImages=true, finalizeTime=0, startTime=1461949370752})>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotNull(Assert.java:664)
	at org.junit.Assert.assertNull(Assert.java:646)
	at org.junit.Assert.assertNull(Assert.java:656)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.checkMxBeanIsNull(TestRollingUpgrade.java:295)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.testDFSAdminRollingUpgradeCommands(TestRollingUpgrade.java:102)


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback

Error Message:
expected null, but was:<javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo$Bean,items=((itemName=blockPoolId,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=createdRollbackImages,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=finalizeTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=startTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={blockPoolId=BP-1681401862-67.195.81.153-1461949367675, createdRollbackImages=true, finalizeTime=0, startTime=1461949370752})>

Stack Trace:
java.lang.AssertionError: expected null, but was:<javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo$Bean,items=((itemName=blockPoolId,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=createdRollbackImages,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=finalizeTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=startTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={blockPoolId=BP-1681401862-67.195.81.153-1461949367675, createdRollbackImages=true, finalizeTime=0, startTime=1461949370752})>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotNull(Assert.java:664)
	at org.junit.Assert.assertNull(Assert.java:646)
	at org.junit.Assert.assertNull(Assert.java:656)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.checkMxBeanIsNull(TestRollingUpgrade.java:295)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback(TestRollingUpgrade.java:324)


FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.testReconstructForNotEnoughRacks

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:153)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling

Error Message:
Throttle is too permissive

Stack Trace:
java.lang.AssertionError: Throttle is too permissive
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:701)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionDeadDN

Error Message:
Problem binding to [localhost:51282] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:51282] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:530)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:924)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1297)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2519)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2242)
	at org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionDeadDN(TestDecommissioningStatus.java:414)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithJournalNodes

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithJournalNodes(TestDFSUpgradeWithHA.java:687)


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":34427; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":34427; 
	at java.net.SocketInputStream.read(SocketInputStream.java:196)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.security.krb5.internal.TCPClient.readFully(NetClient.java:132)
	at sun.security.krb5.internal.TCPClient.receive(NetClient.java:84)
	at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:390)
	at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:343)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.krb5.KdcComm.send(KdcComm.java:327)
	at sun.security.krb5.KdcComm.send(KdcComm.java:219)
	at sun.security.krb5.KdcComm.send(KdcComm.java:191)
	at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:415)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:797)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:792)
	at org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:415)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1505)
	at org.apache.hadoop.ipc.Client.call(Client.java:1375)
	at org.apache.hadoop.ipc.Client.call(Client.java:1353)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:569)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy26.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:206)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:95)




Hadoop-Hdfs-trunk - Build # 3083 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3083/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5453 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:01 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:29 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.103 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:34 h
[INFO] Finished at: 2016-04-29T18:58:52+00:00
[INFO] Final Memory: 58M/792M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion

Error Message:
expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount

Error Message:
expected:<20.0> but was:<21.0>

Stack Trace:
java.lang.AssertionError: expected:<20.0> but was:<21.0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:494)
	at org.junit.Assert.assertEquals(Assert.java:592)
	at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.checkClusterHealth(TestNamenodeCapacityReport.java:320)
	at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount(TestNamenodeCapacityReport.java:280)


FAILED:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithRename

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1994)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithRename(TestOpenFilesWithSnapshot.java:210)


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback(TestRollingUpgrade.java:351)


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":50952; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":50952; 
	at java.net.SocketInputStream.read(SocketInputStream.java:196)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.security.krb5.internal.TCPClient.readFully(NetClient.java:132)
	at sun.security.krb5.internal.TCPClient.receive(NetClient.java:84)
	at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:390)
	at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:343)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.krb5.KdcComm.send(KdcComm.java:327)
	at sun.security.krb5.KdcComm.send(KdcComm.java:219)
	at sun.security.krb5.KdcComm.send(KdcComm.java:191)
	at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:415)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:797)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:792)
	at org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:415)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1505)
	at org.apache.hadoop.ipc.Client.call(Client.java:1375)
	at org.apache.hadoop.ipc.Client.call(Client.java:1353)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:569)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy26.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:192)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:95)




Hadoop-Hdfs-trunk - Build # 3084 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3084/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6725 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:55 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:29 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.106 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:35 h
[INFO] Finished at: 2016-04-29T20:52:22+00:00
[INFO] Final Memory: 58M/819M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
39 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:45690,DS-d16f1619-f67e-492e-8192-9b685bcfff67,DISK], DatanodeInfoWithStorage[127.0.0.1:36326,DS-95cd7b4b-cf73-43fe-82a4-901ab80d52ed,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:45690,DS-d16f1619-f67e-492e-8192-9b685bcfff67,DISK], DatanodeInfoWithStorage[127.0.0.1:36326,DS-95cd7b4b-cf73-43fe-82a4-901ab80d52ed,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:45690,DS-d16f1619-f67e-492e-8192-9b685bcfff67,DISK], DatanodeInfoWithStorage[127.0.0.1:36326,DS-95cd7b4b-cf73-43fe-82a4-901ab80d52ed,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:45690,DS-d16f1619-f67e-492e-8192-9b685bcfff67,DISK], DatanodeInfoWithStorage[127.0.0.1:36326,DS-95cd7b4b-cf73-43fe-82a4-901ab80d52ed,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1236)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1427)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1342)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:603)
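
The failure message above explicitly names the client setting involved: 'dfs.client.block.write.replace-datanode-on-failure.policy', with the current policy reported as DEFAULT and no spare datanode available to swap into the pipeline. As a minimal sketch only (assuming hadoop-common on the classpath; the value "NEVER" is used purely for illustration and is not taken from this log, which only confirms DEFAULT), a client could override that policy in its Configuration like this:

    import org.apache.hadoop.conf.Configuration;

    // Minimal sketch, not the test's actual fix: overriding the
    // replace-datanode-on-failure policy named in the failure message above.
    // "NEVER" is an assumed illustrative value; the log only shows DEFAULT.
    public class ReplaceDatanodePolicySketch {
      private static final String POLICY_KEY =
          "dfs.client.block.write.replace-datanode-on-failure.policy";

      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Property name copied verbatim from the error message above.
        conf.set(POLICY_KEY, "NEVER");
        // Any FileSystem/DFSClient built from this conf would pick up the
        // override instead of the DEFAULT policy reported in the failed test.
        System.out.println(POLICY_KEY + " = " + conf.get(POLICY_KEY));
      }
    }

Per the message, the mini cluster backing this append test had no additional good datanode to try, so under the DEFAULT policy the pipeline recovery aborts rather than continuing on the surviving replicas; relaxing the policy is one client-side knob the error itself points at.
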


FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:657)


FAILED:  org.apache.hadoop.hdfs.TestMissingBlocksAlert.testMissingBlocksAlert

Error Message:
expected:<2> but was:<4>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<4>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.TestMissingBlocksAlert.testMissingBlocksAlert(TestMissingBlocksAlert.java:119)


FAILED:  org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.testStripedFile0

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertFalse(Assert.java:64)
	at org.junit.Assert.assertFalse(Assert.java:74)
	at org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.doTest(TestSafeModeWithStripedFile.java:156)
	at org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.testStripedFile0(TestSafeModeWithStripedFile.java:76)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
Expected to find 'DatanodeInfoWithStorage[127.0.0.1:56359,DS-e7dc216b-bed0-42ad-9476-78920d53c6a9,DISK]' but got unexpected exception:java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:56359,DS-9433b7b6-78c3-4d63-9e17-9710de9b21ba,DISK]] are bad. Aborting...
 at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
 at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
 at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
 at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)


Stack Trace:
java.lang.AssertionError: Expected to find 'DatanodeInfoWithStorage[127.0.0.1:56359,DS-e7dc216b-bed0-42ad-9476-78920d53c6a9,DISK]' but got unexpected exception:java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:56359,DS-9433b7b6-78c3-4d63-9e17-9710de9b21ba,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)

	at org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:248)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume(TestFsDatasetImpl.java:686)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:56359,DS-9433b7b6-78c3-4d63-9e17-9710de9b21ba,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.testBlockScheduledUpdate

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1913)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.setup(TestAddStripedBlocks.java:82)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.testAddUCReplica

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1913)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.setup(TestAddStripedBlocks.java:82)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.testAllocateBlockId

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1913)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.setup(TestAddStripedBlocks.java:82)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.testAddStripedBlock

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1913)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.setup(TestAddStripedBlocks.java:82)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.testCheckStripedReplicaCorrupt

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1913)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.setup(TestAddStripedBlocks.java:82)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.testGetLocatedStripedBlocks

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1913)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks.setup(TestAddStripedBlocks.java:82)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus

Error Message:
Unexpected num under-replicated blocks expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: Unexpected num under-replicated blocks expected:<4> but was:<3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:196)
	at org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:302)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect txid expected:<1> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs(TestEditLog.java:594)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testStoragePoliciesCK

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testStoragePoliciesCK(TestFsck.java:1667)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testToCheckTheFsckCommandOnIllegalArguments

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testToCheckTheFsckCommandOnIllegalArguments(TestFsck.java:1116)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testUnderMinReplicatedBlock

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testUnderMinReplicatedBlock(TestFsck.java:860)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckMoveAndDelete

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckMoveAndDelete(TestFsck.java:563)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckSymlink

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckSymlink(TestFsck.java:1366)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsck

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsck(TestFsck.java:182)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckForSnapshotFiles

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckForSnapshotFiles(TestFsck.java:1388)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testBlockIdCKDecommission

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testBlockIdCKDecommission(TestFsck.java:1510)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testCorruptBlock

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testCorruptBlock(TestFsck.java:783)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckMove

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckMove(TestFsck.java:352)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckMoveAfterCorruption

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckMoveAfterCorruption(TestFsck.java:1944)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckListCorruptFilesBlocks

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckListCorruptFilesBlocks(TestFsck.java:1048)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testBlockIdCKCorruption

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testBlockIdCKCorruption(TestFsck.java:1605)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckError

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckError(TestFsck.java:1017)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckNonExistent

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckNonExistent(TestFsck.java:272)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckWithDecommissionedReplicas

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckWithDecommissionedReplicas(TestFsck.java:1725)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckReplicaDetails

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckReplicaDetails(TestFsck.java:939)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testECFsck

Error Message:
org/apache/hadoop/io/erasurecode/CodecUtil

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/io/erasurecode/CodecUtil
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.<init>(DFSStripedOutputStream.java:289)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:283)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1183)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1125)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:415)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:412)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:412)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:355)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:902)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:829)
	at org.apache.hadoop.hdfs.DFSTestUtil.createStripedFile(DFSTestUtil.java:1930)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testECFsck(TestFsck.java:1799)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckOpenECFiles

Error Message:
org/apache/hadoop/io/erasurecode/CodecUtil

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/io/erasurecode/CodecUtil
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.<init>(DFSStripedOutputStream.java:289)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:283)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1183)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1125)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:415)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:412)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:412)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:968)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:400)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckOpenECFiles(TestFsck.java:702)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testBlockIdCK

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testBlockIdCK(TestFsck.java:1451)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckListCorruptSnapshotFiles

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckListCorruptSnapshotFiles(TestFsck.java:1863)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckPermission

Error Message:
org/apache/hadoop/security/ShellBasedUnixGroupsMapping$PartialGroupNameException

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/ShellBasedUnixGroupsMapping$PartialGroupNameException
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at java.lang.Class.getDeclaredConstructors0(Native Method)
	at java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
	at java.lang.Class.getConstructor0(Class.java:2803)
	at java.lang.Class.getDeclaredConstructor(Class.java:2053)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
	at org.apache.hadoop.security.Groups.<init>(Groups.java:80)
	at org.apache.hadoop.security.Groups.<init>(Groups.java:76)
	at org.apache.hadoop.security.UserGroupInformation$TestingGroups.<init>(UserGroupInformation.java:1409)
	at org.apache.hadoop.security.UserGroupInformation$TestingGroups.<init>(UserGroupInformation.java:1403)
	at org.apache.hadoop.security.UserGroupInformation.createUserForTesting(UserGroupInformation.java:1443)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckPermission(TestFsck.java:304)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckOpenFiles

Error Message:
org/apache/hadoop/security/authentication/client/ConnectionConfigurator

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:130)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.runFsck(TestFsck.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckOpenFiles(TestFsck.java:637)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls[0]

Error Message:
Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-04-29 08:18:44,321

"IPC Server handler 8 on 27003" daemon prio=5 tid=52 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-0" daemon prio=5 tid=27 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Parameter Sending Thread #2" daemon prio=5 tid=131 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"client DomainSocketWatcher" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 38652"  prio=5 tid=103 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"1062543464@qtp-687848729-0" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 3 on 33611" daemon prio=5 tid=81 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@7675ec7c" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Timer-3" daemon prio=5 tid=61 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"DecommissionMonitor-0" daemon prio=5 tid=77 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"DecommissionMonitor-0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 38652" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 4 on 27003" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"37940094@qtp-1833266282-0" daemon prio=5 tid=25 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"AsyncAppender-Dispatcher-Thread-38" daemon prio=5 tid=54 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 27003" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-1" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@64399b3a" daemon prio=5 tid=33 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1640)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:271)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.waitForLogRollInSharedDir(TestEditLogTailer.java:199)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRolls(TestEditLogTailer.java:178)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls(TestEditLogTailer.java:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.junit.runners.Suite.runChild(Suite.java:127)
        at org.junit.runners.Suite.runChild(Suite.java:26)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"pool-8-thread-1"  prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=66 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 5 on 38652" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 2 on 33611" daemon prio=5 tid=80 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 5 on 33611" daemon prio=5 tid=83 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 1 on 38652" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 7 on 38652" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 6 on 27003" daemon prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 33611" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"ReplicationMonitor" daemon prio=5 tid=30 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@52a7e380" daemon prio=5 tid=76 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=31 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 27003" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=21 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 4 on 38652" daemon prio=5 tid=115 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 6 on 33611" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 27003"  prio=5 tid=36 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"IPC Server listener on 33611" daemon prio=5 tid=69 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@155d2584" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"CacheReplicationMonitor(904879612)"  prio=5 tid=141 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 8 on 38652" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@2ed4040b" daemon prio=5 tid=109 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 27003" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 33611"  prio=5 tid=70 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"process reaper" daemon prio=10 tid=9 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=99 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server listener on 38652" daemon prio=5 tid=102 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Server handler 1 on 27003" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@6e2e0dfb" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"DecommissionMonitor-0" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 27003" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server idle connection scanner for port 38652" daemon prio=5 tid=104 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"pool-11-thread-1"  prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@597a8430" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@27d07bf1" daemon prio=5 tid=137 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 4 on 33611" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"130004337@qtp-687848729-1 - Acceptor0 SelectChannelConnector@localhost:44027" daemon prio=5 tid=93 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 7 on 27003" daemon prio=5 tid=51 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"pool-3-thread-1"  prio=5 tid=24 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server Responder" daemon prio=5 tid=72 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"Timer-2" daemon prio=5 tid=29 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 9 on 33611" daemon prio=5 tid=87 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@2d6bf1ae" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
"IPC Server handler 9 on 38652" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server Responder" daemon prio=5 tid=38 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@508892c2" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"Timer-4" daemon prio=5 tid=62 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Edit log tailer"  prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"Timer-6" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@4e3fe32c" daemon prio=5 tid=138 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 38652" daemon prio=5 tid=117 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"100295674@qtp-1833266282-1 - Acceptor0 SelectChannelConnector@localhost:46037" daemon prio=5 tid=26 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 0 on 38652" daemon prio=5 tid=111 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 33611" daemon prio=5 tid=86 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server Responder" daemon prio=5 tid=105 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"pool-6-thread-1"  prio=5 tid=58 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Timer-7" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"1732255792@qtp-28014348-1 - Acceptor0 SelectChannelConnector@localhost:33043" daemon prio=5 tid=60 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@71747997" daemon prio=5 tid=139 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=32 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 3 on 27003" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 3 on 38652" daemon prio=5 tid=114 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 33611" daemon prio=5 tid=78 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"pool-9-thread-1"  prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Edit log tailer"  prio=5 tid=56 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"IPC Server handler 1 on 33611" daemon prio=5 tid=79 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"ReplicationMonitor" daemon prio=5 tid=64 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"764458246@qtp-28014348-0" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 9 on 27003" daemon prio=5 tid=53 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-8" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@7703d473" daemon prio=5 tid=140 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=19 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3231)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 7 on 33611" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"pool-5-thread-1"  prio=5 tid=55 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"Timer-5" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server listener on 27003" daemon prio=5 tid=35 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)



Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-04-29 08:18:44,321

"IPC Server handler 8 on 27003" daemon prio=5 tid=52 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-0" daemon prio=5 tid=27 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Parameter Sending Thread #2" daemon prio=5 tid=131 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"client DomainSocketWatcher" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 38652"  prio=5 tid=103 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"1062543464@qtp-687848729-0" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 3 on 33611" daemon prio=5 tid=81 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@7675ec7c" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Timer-3" daemon prio=5 tid=61 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"DecommissionMonitor-0" daemon prio=5 tid=77 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"DecommissionMonitor-0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 38652" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 4 on 27003" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"37940094@qtp-1833266282-0" daemon prio=5 tid=25 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"AsyncAppender-Dispatcher-Thread-38" daemon prio=5 tid=54 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 27003" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-1" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@64399b3a" daemon prio=5 tid=33 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1640)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:271)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.waitForLogRollInSharedDir(TestEditLogTailer.java:199)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRolls(TestEditLogTailer.java:178)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls(TestEditLogTailer.java:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.junit.runners.Suite.runChild(Suite.java:127)
        at org.junit.runners.Suite.runChild(Suite.java:26)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"pool-8-thread-1"  prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=66 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 5 on 38652" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 2 on 33611" daemon prio=5 tid=80 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 5 on 33611" daemon prio=5 tid=83 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 1 on 38652" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 7 on 38652" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 6 on 27003" daemon prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 33611" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"ReplicationMonitor" daemon prio=5 tid=30 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@52a7e380" daemon prio=5 tid=76 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=31 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 27003" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=21 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 4 on 38652" daemon prio=5 tid=115 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 6 on 33611" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 27003"  prio=5 tid=36 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"IPC Server listener on 33611" daemon prio=5 tid=69 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@155d2584" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"CacheReplicationMonitor(904879612)"  prio=5 tid=141 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 8 on 38652" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@2ed4040b" daemon prio=5 tid=109 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 27003" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 33611"  prio=5 tid=70 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"process reaper" daemon prio=10 tid=9 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=99 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server listener on 38652" daemon prio=5 tid=102 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Server handler 1 on 27003" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@6e2e0dfb" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"DecommissionMonitor-0" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 27003" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server idle connection scanner for port 38652" daemon prio=5 tid=104 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"pool-11-thread-1"  prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@597a8430" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@27d07bf1" daemon prio=5 tid=137 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 4 on 33611" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"130004337@qtp-687848729-1 - Acceptor0 SelectChannelConnector@localhost:44027" daemon prio=5 tid=93 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 7 on 27003" daemon prio=5 tid=51 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"pool-3-thread-1"  prio=5 tid=24 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server Responder" daemon prio=5 tid=72 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"Timer-2" daemon prio=5 tid=29 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 9 on 33611" daemon prio=5 tid=87 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@2d6bf1ae" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
"IPC Server handler 9 on 38652" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server Responder" daemon prio=5 tid=38 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@508892c2" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"Timer-4" daemon prio=5 tid=62 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Edit log tailer"  prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"Timer-6" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@4e3fe32c" daemon prio=5 tid=138 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 38652" daemon prio=5 tid=117 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"100295674@qtp-1833266282-1 - Acceptor0 SelectChannelConnector@localhost:46037" daemon prio=5 tid=26 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 0 on 38652" daemon prio=5 tid=111 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 33611" daemon prio=5 tid=86 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server Responder" daemon prio=5 tid=105 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"pool-6-thread-1"  prio=5 tid=58 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Timer-7" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"1732255792@qtp-28014348-1 - Acceptor0 SelectChannelConnector@localhost:33043" daemon prio=5 tid=60 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@71747997" daemon prio=5 tid=139 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=32 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 3 on 27003" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 3 on 38652" daemon prio=5 tid=114 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 33611" daemon prio=5 tid=78 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"pool-9-thread-1"  prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Edit log tailer"  prio=5 tid=56 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"IPC Server handler 1 on 33611" daemon prio=5 tid=79 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"ReplicationMonitor" daemon prio=5 tid=64 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"764458246@qtp-28014348-0" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 9 on 27003" daemon prio=5 tid=53 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-8" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@7703d473" daemon prio=5 tid=140 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=19 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3231)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 7 on 33611" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"pool-5-thread-1"  prio=5 tid=55 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"Timer-5" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server listener on 27003" daemon prio=5 tid=35 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)


	at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:271)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.waitForLogRollInSharedDir(TestEditLogTailer.java:199)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRolls(TestEditLogTailer.java:178)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls(TestEditLogTailer.java:146)
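
The thread dump above was emitted because GenericTestUtils.waitFor gave up waiting for the standby NameNode to trigger a log roll in the shared edits dir. As a rough, hedged illustration only (the condition, poll interval, and timeout below are hypothetical placeholders, not Hadoop's actual helper), the poll-until-timeout pattern behind that wait looks roughly like this:

import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

// Minimal sketch of a poll-until-timeout loop: re-check a condition every
// checkEveryMillis until it holds or waitForMillis elapses, then time out.
// The condition here is hypothetical; the real test checks that a new
// edit-log segment has appeared in the shared edits directory.
public final class WaitForSketch {
  public static void waitFor(Supplier<Boolean> check,
                             long checkEveryMillis,
                             long waitForMillis)
      throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + waitForMillis;
    while (!check.get()) {
      if (System.currentTimeMillis() > deadline) {
        // This is the point at which diagnostics such as the thread dump
        // above become useful for seeing what the cluster was doing.
        throw new TimeoutException("Timed out waiting for condition");
      }
      Thread.sleep(checkEveryMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    long start = System.currentTimeMillis();
    // Hypothetical condition: becomes true after roughly two seconds.
    waitFor(() -> System.currentTimeMillis() - start > 2000, 100, 10000);
    System.out.println("condition met");
  }
}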


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":40339; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":40339; 
	at sun.security.krb5.KdcComm.send(KdcComm.java:250)
	at sun.security.krb5.KdcComm.send(KdcComm.java:191)
	at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:415)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:797)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:792)
	at org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:415)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1505)
	at org.apache.hadoop.ipc.Client.call(Client.java:1375)
	at org.apache.hadoop.ipc.Client.call(Client.java:1353)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:569)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy26.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:206)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:146)


FAILED:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithRename

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1994)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithRename(TestOpenFilesWithSnapshot.java:210)
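
For context on this failure mode: MiniDFSCluster.waitClusterUp blocks until the restarted NameNode reports the cluster as up or a timeout elapses, and the IOException above is the timeout path. The sketch below is a generic, self-contained illustration of that poll-until-ready pattern in plain Java; the helper name, timeout values, and the example condition are assumptions for illustration only, not the actual Hadoop implementation.

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import java.util.function.BooleanSupplier;

    /**
     * Generic poll-until-ready helper, illustrating the kind of wait loop a
     * "wait for cluster up" call performs before a test proceeds.
     */
    public class WaitForCondition {

      static void waitFor(BooleanSupplier condition, long timeoutMs, long pollMs)
          throws TimeoutException, InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (!condition.getAsBoolean()) {
          if (System.nanoTime() > deadline) {
            // Mirrors the "Timed out waiting for ... to start" failure mode seen above.
            throw new TimeoutException("Timed out waiting for condition after " + timeoutMs + " ms");
          }
          Thread.sleep(pollMs);
        }
      }

      public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Example: wait up to 5 seconds for a condition that becomes true after ~1 second.
        waitFor(() -> System.currentTimeMillis() - start > 1000, 5000, 100);
        System.out.println("condition satisfied");
      }
    }
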




Hadoop-Hdfs-trunk - Build # 3085 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3085/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6246 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:05 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [56:48 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.098 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 h
[INFO] Finished at: 2016-04-29T23:06:41+00:00
[INFO] Final Memory: 58M/755M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED:  org.apache.hadoop.hdfs.TestDFSClientRetries.testNamenodeRestart

Error Message:
test timed out after 300000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 300000 milliseconds
	at org.apache.hadoop.ipc.Client$Connection.addCall(Client.java:514)
	at org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:415)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1375)
	at org.apache.hadoop.ipc.Client.call(Client.java:1353)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:784)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy23.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1630)
	at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1324)
	at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1321)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1321)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:1087)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.testNamenodeRestart(TestDFSClientRetries.java:953)




Hadoop-Hdfs-trunk - Build # 3086 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3086/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6824 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:39 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:41 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.145 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:47 h
[INFO] Finished at: 2016-04-30T07:36:16+00:00
[INFO] Final Memory: 66M/878M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDataTransferKeepalive.testClientResponsesKeepAliveTimeout

Error Message:
Expected 0 xceivers, found 2

Stack Trace:
java.lang.AssertionError: Expected 0 xceivers, found 2
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.hdfs.TestDataTransferKeepalive.assertXceiverCount(TestDataTransferKeepalive.java:250)
	at org.apache.hadoop.hdfs.TestDataTransferKeepalive.testClientResponsesKeepAliveTimeout(TestDataTransferKeepalive.java:144)


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback(TestRollingUpgrade.java:351)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.testClusterIdMismatchAtStartupWithHA

Error Message:
test timed out after 20000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 20000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations.testClusterIdMismatchAtStartupWithHA(TestDataNodeMultipleRegistrations.java:253)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure

Error Message:
There is no under replicated block after volume failure

Stack Trace:
java.lang.AssertionError: There is no under replicated block after volume failure
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure(TestDataNodeVolumeFailure.java:412)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture

Error Message:
expected:<17> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<17> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture(TestNameNodeMetadataConsistency.java:113)


FAILED:  org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.testAuthUrlReadTimeout[timeoutSource=Configuration]

Error Message:
Expected to find 'localhost:44853: Read timed out' but got unexpected exception:java.net.SocketTimeoutException: localhost:44853: null
 at java.net.SocksSocketImpl.remainingMillis(SocksSocketImpl.java:111)
 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 at java.net.Socket.connect(Socket.java:579)
 at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
 at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
 at sun.net.www.http.HttpClient.New(HttpClient.java:308)
 at sun.net.www.http.HttpClient.New(HttpClient.java:326)
 at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
 at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
 at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:684)
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:637)
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:709)
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:555)
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:586)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:582)
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1458)
 at org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.testAuthUrlReadTimeout(TestWebHdfsTimeouts.java:195)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)


Stack Trace:
java.lang.AssertionError: Expected to find 'localhost:44853: Read timed out' but got unexpected exception:java.net.SocketTimeoutException: localhost:44853: null
	at java.net.SocksSocketImpl.remainingMillis(SocksSocketImpl.java:111)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:684)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:637)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:709)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:555)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:586)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:582)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1458)
	at org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.testAuthUrlReadTimeout(TestWebHdfsTimeouts.java:195)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

	at java.net.SocksSocketImpl.remainingMillis(SocksSocketImpl.java:111)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:684)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:637)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:709)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:555)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:586)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:582)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1458)
	at org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.testAuthUrlReadTimeout(TestWebHdfsTimeouts.java:195)
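
The assertion above expected the read-timeout path ("Read timed out"), but the SocketTimeoutException was raised earlier, during connection setup through the SOCKS socket layer, and carried a null message. As a generic illustration (not Hadoop code) of the two distinct timeout knobs involved, the following sketch sets both on a plain HttpURLConnection; the URL and timeout values are placeholders.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    /** Demonstrates the connect-timeout vs. read-timeout split on HttpURLConnection. */
    public class TimeoutDemo {
      public static void main(String[] args) throws IOException {
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://localhost:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN")
                .openConnection();
        conn.setConnectTimeout(1000); // fail fast if the TCP connect hangs
        conn.setReadTimeout(1000);    // "Read timed out" if the server accepts but never responds
        try (InputStream in = conn.getInputStream()) {
          System.out.println("HTTP " + conn.getResponseCode());
        } catch (SocketTimeoutException e) {
          // The message is "Read timed out" when the read timeout fires; it can be
          // null when the timeout is hit during connection setup instead.
          System.out.println("timed out: " + e.getMessage());
        } catch (IOException e) {
          System.out.println("other I/O failure: " + e);
        }
      }
    }
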




Hadoop-Hdfs-trunk - Build # 3087 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3087/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6802 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:22 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:39 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.119 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:44 h
[INFO] Finished at: 2016-04-30T21:33:27+00:00
[INFO] Final Memory: 58M/795M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
7 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:657)


FAILED:  org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM.testSecureMode

Error Message:
test timed out after 30000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1339)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985)
	at org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM.restartNameNode(TestSecureNNWithQJM.java:205)
	at org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM.doNNWithQJMTest(TestSecureNNWithQJM.java:193)
	at org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM.testSecureMode(TestSecureNNWithQJM.java:167)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten

Error Message:
expected:<0> but was:<10506638115>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<10506638115>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWrittenForDatanode(TestDataNodeHotSwapVolumes.java:694)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten(TestDataNodeHotSwapVolumes.java:609)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testRemoveVolumeBeingWritten

Error Message:
test timed out after 30000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30000 milliseconds
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testRemoveVolumeBeingWritten(TestFsDatasetImpl.java:630)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount

Error Message:
expected:<26.0> but was:<27.0>

Stack Trace:
java.lang.AssertionError: expected:<26.0> but was:<27.0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:494)
	at org.junit.Assert.assertEquals(Assert.java:592)
	at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.checkClusterHealth(TestNamenodeCapacityReport.java:320)
	at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount(TestNamenodeCapacityReport.java:280)


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":42196; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":42196; 
	at sun.security.krb5.KdcComm.send(KdcComm.java:250)
	at sun.security.krb5.KdcComm.send(KdcComm.java:191)
	at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:615)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:415)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:797)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:793)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:792)
	at org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:415)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1505)
	at org.apache.hadoop.ipc.Client.call(Client.java:1375)
	at org.apache.hadoop.ipc.Client.call(Client.java:1353)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:569)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy26.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:206)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:146)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure

Error Message:
There is no under replicated block after volume failure

Stack Trace:
java.lang.AssertionError: There is no under replicated block after volume failure
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure(TestDataNodeVolumeFailure.java:412)




Hadoop-Hdfs-trunk - Build # 3089 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3089/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10137 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:12 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:39 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.105 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:44 h
[INFO] Finished at: 2016-05-03T00:42:44+00:00
[INFO] Final Memory: 59M/804M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
14 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite

Error Message:
test timed out after 120000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 120000 milliseconds
	at java.lang.Object.wait(Native Method)
	at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:430)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:221)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:184)


FAILED:  org.apache.hadoop.hdfs.TestAsyncDFSRename.testConservativeConcurrentAsyncRenameWithOverwrite

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:825)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:784)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:430)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:221)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.testConservativeConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:191)


FAILED:  org.apache.hadoop.hdfs.TestAsyncDFSRename.testAsyncRenameWithOverwrite

Error Message:
Cannot remove data directory: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/datapath '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace': 
 absolute:/home/jenkins/jenkins-slave/workspace
 permissions: drwx
path '/home/jenkins/jenkins-slave': 
 absolute:/home/jenkins/jenkins-slave
 permissions: drwx
path '/home/jenkins': 
 absolute:/home/jenkins
 permissions: drwx
path '/home': 
 absolute:/home
 permissions: dr-x
path '/': 
 absolute:/
 permissions: dr-x


Stack Trace:
java.io.IOException: Cannot remove data directory: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/datapath '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace': 
	absolute:/home/jenkins/jenkins-slave/workspace
	permissions: drwx
path '/home/jenkins/jenkins-slave': 
	absolute:/home/jenkins/jenkins-slave
	permissions: drwx
path '/home/jenkins': 
	absolute:/home/jenkins
	permissions: drwx
path '/home': 
	absolute:/home
	permissions: dr-x
path '/': 
	absolute:/
	permissions: dr-x

	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:834)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.testAsyncRenameWithOverwrite(TestAsyncDFSRename.java:56)
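
The "Cannot remove data directory" message above is followed by the test framework's own diagnostic dump: for the data directory and each ancestor it prints the absolute path and the owner permission bits it sees (here "drwx", i.e. the directories exist but cannot be removed where the test needs it). The stand-alone sketch below shows the same walk-up-and-report idea in plain Java; it is illustrative only, assumes a POSIX file system, and uses a placeholder path.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.util.Set;

    /**
     * Walks from a target directory up to the root and reports the owner
     * read/write/execute bits of each level, similar in spirit to the
     * diagnostic printed above.
     */
    public class PermissionWalk {
      public static void main(String[] args) throws IOException {
        Path p = Paths.get(args.length > 0 ? args[0] : "target/test/data/1/dfs/data")
            .toAbsolutePath().normalize();
        for (Path cur = p; cur != null; cur = cur.getParent()) {
          if (!Files.exists(cur)) {
            System.out.println(cur + ": does not exist");
            continue;
          }
          Set<PosixFilePermission> perms = Files.getPosixFilePermissions(cur);
          System.out.printf("%s -> owner r=%b w=%b x=%b%n", cur,
              perms.contains(PosixFilePermission.OWNER_READ),
              perms.contains(PosixFilePermission.OWNER_WRITE),
              perms.contains(PosixFilePermission.OWNER_EXECUTE));
        }
      }
    }
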


FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback(TestRollingUpgrade.java:351)


FAILED:  org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.testStripedFile0

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertFalse(Assert.java:64)
	at org.junit.Assert.assertFalse(Assert.java:74)
	at org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.doTest(TestSafeModeWithStripedFile.java:156)
	at org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.testStripedFile0(TestSafeModeWithStripedFile.java:76)


FAILED:  org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs

Error Message:
test timed out after 5000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 5000 milliseconds
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:798)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:430)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:108)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:910)
	at org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:170)


FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testDatanodeCursor

Error Message:
Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-05-02 11:31:50,455

"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3c7021ae" daemon prio=5 tid=1524 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3fa3e618" daemon prio=5 tid=1340 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Timer-54" daemon prio=5 tid=1529 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"LeaseRenewer:jenkins@localhost:56041" daemon prio=5 tid=584 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@43768c6c" daemon prio=5 tid=699 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server Responder" daemon prio=5 tid=1537 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"IPC Server Responder" daemon prio=5 tid=1355 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"DecommissionMonitor-0" daemon prio=5 tid=718 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 44696" daemon prio=5 tid=764 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 9 on 54968" daemon prio=5 tid=1370 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2d3e9a1f" daemon prio=5 tid=738 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=1347 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"CacheReplicationMonitor(767074210)"  prio=5 tid=1377 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 1 on 35055" daemon prio=5 tid=721 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 44696" daemon prio=5 tid=766 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1)" daemon prio=5 tid=769 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"725077719@qtp-1551594890-1 - Acceptor0 SelectChannelConnector@localhost:43470" daemon prio=5 tid=1343 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/current/BP-1448114710-67.195.81.153-1462231867276" daemon prio=5 tid=772 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:158)
        at java.lang.Thread.run(Thread.java:745)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"Timer-56" daemon prio=5 tid=1531 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"Socket Reader #1 for port 35055"  prio=5 tid=712 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"IPC Server handler 7 on 44696" daemon prio=5 tid=765 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"java.util.concurrent.ThreadPoolExecutor$Worker@56757a77[State = -1, empty queue]" daemon prio=5 tid=776 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"387106665@qtp-621486673-0" daemon prio=5 tid=741 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"main"  prio=5 tid=1 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1289)
        at org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26)
        at org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"Socket Reader #1 for port 44696"  prio=5 tid=749 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@3f807a01" daemon prio=5 tid=1374 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"pool-86-thread-1"  prio=5 tid=1371 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"LeaseRenewer:jenkins@localhost:54968" daemon prio=5 tid=1418 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"Timer-26" daemon prio=5 tid=705 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Timer-29" daemon prio=5 tid=745 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 4 on 35055" daemon prio=5 tid=724 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Thread-798"  prio=5 tid=1332 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1640)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:269)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testDatanodeCursor(TestBlockScanner.java:572)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6c92b1a1" daemon prio=5 tid=709 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 54968"  prio=5 tid=1353 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1)" daemon prio=5 tid=1553 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"AsyncAppender-Dispatcher-Thread-37" daemon prio=5 tid=53 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"1895956523@qtp-1367291893-0" daemon prio=5 tid=701 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 2 on 59927" daemon prio=5 tid=1544 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 7 on 59927" daemon prio=5 tid=1549 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@7c01ddfd" daemon prio=5 tid=1533 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 35055" daemon prio=5 tid=725 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@6722b3ca" daemon prio=5 tid=717 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 59927" daemon prio=5 tid=1547 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@78c76c3a" daemon prio=5 tid=1375 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"Timer-28" daemon prio=5 tid=744 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"LeaseRenewer:jenkins@localhost:49490" daemon prio=5 tid=214 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"pool-51-thread-1"  prio=5 tid=755 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 44696" daemon prio=5 tid=763 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-49" daemon prio=5 tid=1345 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"pool-84-thread-1"  prio=5 tid=1341 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@21c10295" daemon prio=5 tid=1350 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"Timer-25" daemon prio=5 tid=704 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 0 on 35055" daemon prio=5 tid=720 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=760 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server listener on 35055" daemon prio=5 tid=711 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"nioEventLoopGroup-8-1"  prio=10 tid=746 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 7 on 54968" daemon prio=5 tid=1368 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@265ae4b8" daemon prio=5 tid=747 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 54968" daemon prio=5 tid=1364 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 9 on 35055" daemon prio=5 tid=729 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server listener on 44696" daemon prio=5 tid=748 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Client (415880203) connection to localhost/127.0.0.1:54968 from jenkins" daemon prio=5 tid=1407 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1009)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1054)
"pool-92-thread-1"  prio=5 tid=1526 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server Responder" daemon prio=5 tid=751 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"LeaseRenewer:jenkins@localhost:46237" daemon prio=5 tid=1246 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@695330d" daemon prio=5 tid=735 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 8 on 59927" daemon prio=5 tid=1550 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 2 on 44696" daemon prio=5 tid=759 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"LeaseRenewer:jenkins@localhost:35055" daemon prio=5 tid=778 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"968575458@qtp-1367291893-1 - Acceptor0 SelectChannelConnector@localhost:40722" daemon prio=5 tid=702 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Timer-24" daemon prio=5 tid=703 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@692dbf7c" daemon prio=5 tid=1358 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 54968" daemon prio=5 tid=1367 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 59927" daemon prio=5 tid=1536 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 3 on 59927" daemon prio=5 tid=1545 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"LeaseRenewer:jenkins@localhost:35497" daemon prio=5 tid=884 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 54968" daemon prio=5 tid=1361 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Client (415880203) connection to localhost/127.0.0.1:35055 from jenkins" daemon prio=5 tid=756 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1009)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1054)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/]]  heartbeating to localhost/127.0.0.1:54968" daemon prio=5 tid=1540 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:617)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:737)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 54968" daemon prio=5 tid=1366 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 59927"  prio=5 tid=1535 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"pool-50-thread-1"  prio=5 tid=740 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@aa11928" daemon prio=5 tid=1376 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 35055" daemon prio=5 tid=722 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 35055" daemon prio=5 tid=713 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5116a644" daemon prio=5 tid=1373 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/]]  heartbeating to localhost/127.0.0.1:35055" daemon prio=5 tid=754 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:617)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:737)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 4 on 44696" daemon prio=5 tid=762 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 1 on 44696" daemon prio=5 tid=758 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"datanode DomainSocketWatcher" daemon prio=5 tid=739 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"1051359947@qtp-621486673-1 - Acceptor0 SelectChannelConnector@localhost:38368" daemon prio=5 tid=742 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Timer-55" daemon prio=5 tid=1530 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=151 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3230)
        at java.lang.Thread.run(Thread.java:745)
"pool-93-thread-1"  prio=5 tid=1541 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=708 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"251618136@qtp-1888014391-0" daemon prio=5 tid=1527 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"142048293@qtp-1888014391-1 - Acceptor0 SelectChannelConnector@localhost:37853" daemon prio=5 tid=1528 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 6 on 35055" daemon prio=5 tid=726 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@33b88372" daemon prio=5 tid=734 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 1 on 54968" daemon prio=5 tid=1362 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 59927" daemon prio=5 tid=1542 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"CacheReplicationMonitor(1372058001)"  prio=5 tid=736 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 9 on 59927" daemon prio=5 tid=1551 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"LeaseRenewer:jenkins@localhost:45265" daemon prio=5 tid=1073 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"355583882@qtp-1551594890-0" daemon prio=5 tid=1342 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"Block report processor" daemon prio=5 tid=1349 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 4 on 54968" daemon prio=5 tid=1365 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"AsyncAppender-Dispatcher-Thread-131" daemon prio=5 tid=174 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 59927" daemon prio=5 tid=1548 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"client DomainSocketWatcher" daemon prio=5 tid=192 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"Timer-27" daemon prio=5 tid=743 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=698 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"DecommissionMonitor-0" daemon prio=5 tid=1359 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Timer-48" daemon prio=5 tid=1344 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"StorageInfoMonitor" daemon prio=5 tid=707 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server listener on 54968" daemon prio=5 tid=1352 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/current/BP-245391040-67.195.81.153-1462231876550" daemon prio=5 tid=1555 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:158)
        at java.lang.Thread.run(Thread.java:745)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=874 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 44696" daemon prio=5 tid=761 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 44696" daemon prio=5 tid=757 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"datanode DomainSocketWatcher" daemon prio=5 tid=1525 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 54968" daemon prio=5 tid=1363 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5e27c545" daemon prio=5 tid=733 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 35055" daemon prio=5 tid=723 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 7 on 35055" daemon prio=5 tid=727 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 9 on 44696" daemon prio=5 tid=767 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 35055" daemon prio=5 tid=728 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"StorageInfoMonitor" daemon prio=5 tid=1348 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 8 on 54968" daemon prio=5 tid=1369 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.PeerCache@5aa153ee" daemon prio=5 tid=626 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253)
        at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
        at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 54968" daemon prio=5 tid=1354 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"nioEventLoopGroup-18-1"  prio=10 tid=1532 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=706 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server listener on 59927" daemon prio=5 tid=1534 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@27d3fee4" daemon prio=5 tid=732 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 4 on 59927" daemon prio=5 tid=1546 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"java.util.concurrent.ThreadPoolExecutor$Worker@24d31460[State = -1, empty queue]" daemon prio=5 tid=1559 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 1 on 59927" daemon prio=5 tid=1543 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-50" daemon prio=5 tid=1346 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server idle connection scanner for port 44696" daemon prio=5 tid=750 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server Responder" daemon prio=5 tid=714 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
"pool-47-thread-1"  prio=5 tid=700 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"pool-49-thread-1"  prio=5 tid=730 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"process reaper" daemon prio=10 tid=10 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)



Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-05-02 11:31:50,455

"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3c7021ae" daemon prio=5 tid=1524 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3fa3e618" daemon prio=5 tid=1340 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Timer-54" daemon prio=5 tid=1529 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"LeaseRenewer:jenkins@localhost:56041" daemon prio=5 tid=584 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@43768c6c" daemon prio=5 tid=699 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server Responder" daemon prio=5 tid=1537 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"IPC Server Responder" daemon prio=5 tid=1355 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"DecommissionMonitor-0" daemon prio=5 tid=718 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 44696" daemon prio=5 tid=764 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 9 on 54968" daemon prio=5 tid=1370 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2d3e9a1f" daemon prio=5 tid=738 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
        at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
        at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=1347 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"CacheReplicationMonitor(767074210)"  prio=5 tid=1377 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 1 on 35055" daemon prio=5 tid=721 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 44696" daemon prio=5 tid=766 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1)" daemon prio=5 tid=769 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"725077719@qtp-1551594890-1 - Acceptor0 SelectChannelConnector@localhost:43470" daemon prio=5 tid=1343 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/current/BP-1448114710-67.195.81.153-1462231867276" daemon prio=5 tid=772 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:158)
        at java.lang.Thread.run(Thread.java:745)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"Timer-56" daemon prio=5 tid=1531 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"Socket Reader #1 for port 35055"  prio=5 tid=712 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"IPC Server handler 7 on 44696" daemon prio=5 tid=765 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"java.util.concurrent.ThreadPoolExecutor$Worker@56757a77[State = -1, empty queue]" daemon prio=5 tid=776 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"387106665@qtp-621486673-0" daemon prio=5 tid=741 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"main"  prio=5 tid=1 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1289)
        at org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26)
        at org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"Socket Reader #1 for port 44696"  prio=5 tid=749 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@3f807a01" daemon prio=5 tid=1374 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"pool-86-thread-1"  prio=5 tid=1371 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"LeaseRenewer:jenkins@localhost:54968" daemon prio=5 tid=1418 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"Timer-26" daemon prio=5 tid=705 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Timer-29" daemon prio=5 tid=745 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 4 on 35055" daemon prio=5 tid=724 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Thread-798"  prio=5 tid=1332 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1640)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:269)
        at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testDatanodeCursor(TestBlockScanner.java:572)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6c92b1a1" daemon prio=5 tid=709 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 54968"  prio=5 tid=1353 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1)" daemon prio=5 tid=1553 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"AsyncAppender-Dispatcher-Thread-37" daemon prio=5 tid=53 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"1895956523@qtp-1367291893-0" daemon prio=5 tid=701 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 2 on 59927" daemon prio=5 tid=1544 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 7 on 59927" daemon prio=5 tid=1549 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@7c01ddfd" daemon prio=5 tid=1533 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 35055" daemon prio=5 tid=725 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@6722b3ca" daemon prio=5 tid=717 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 59927" daemon prio=5 tid=1547 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@78c76c3a" daemon prio=5 tid=1375 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"Timer-28" daemon prio=5 tid=744 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"LeaseRenewer:jenkins@localhost:49490" daemon prio=5 tid=214 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"pool-51-thread-1"  prio=5 tid=755 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 44696" daemon prio=5 tid=763 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-49" daemon prio=5 tid=1345 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"pool-84-thread-1"  prio=5 tid=1341 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@21c10295" daemon prio=5 tid=1350 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"Timer-25" daemon prio=5 tid=704 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 0 on 35055" daemon prio=5 tid=720 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=760 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server listener on 35055" daemon prio=5 tid=711 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"nioEventLoopGroup-8-1"  prio=10 tid=746 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 7 on 54968" daemon prio=5 tid=1368 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@265ae4b8" daemon prio=5 tid=747 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 54968" daemon prio=5 tid=1364 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 9 on 35055" daemon prio=5 tid=729 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server listener on 44696" daemon prio=5 tid=748 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Client (415880203) connection to localhost/127.0.0.1:54968 from jenkins" daemon prio=5 tid=1407 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1009)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1054)
"pool-92-thread-1"  prio=5 tid=1526 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server Responder" daemon prio=5 tid=751 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"LeaseRenewer:jenkins@localhost:46237" daemon prio=5 tid=1246 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@695330d" daemon prio=5 tid=735 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 8 on 59927" daemon prio=5 tid=1550 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 2 on 44696" daemon prio=5 tid=759 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"LeaseRenewer:jenkins@localhost:35055" daemon prio=5 tid=778 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"968575458@qtp-1367291893-1 - Acceptor0 SelectChannelConnector@localhost:40722" daemon prio=5 tid=702 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Timer-24" daemon prio=5 tid=703 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@692dbf7c" daemon prio=5 tid=1358 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 54968" daemon prio=5 tid=1367 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 59927" daemon prio=5 tid=1536 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 3 on 59927" daemon prio=5 tid=1545 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"LeaseRenewer:jenkins@localhost:35497" daemon prio=5 tid=884 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 54968" daemon prio=5 tid=1361 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Client (415880203) connection to localhost/127.0.0.1:35055 from jenkins" daemon prio=5 tid=756 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1009)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1054)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/]]  heartbeating to localhost/127.0.0.1:54968" daemon prio=5 tid=1540 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:617)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:737)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 54968" daemon prio=5 tid=1366 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 59927"  prio=5 tid=1535 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"pool-50-thread-1"  prio=5 tid=740 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@aa11928" daemon prio=5 tid=1376 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 35055" daemon prio=5 tid=722 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 35055" daemon prio=5 tid=713 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5116a644" daemon prio=5 tid=1373 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/]]  heartbeating to localhost/127.0.0.1:35055" daemon prio=5 tid=754 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:130)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:617)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:737)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 4 on 44696" daemon prio=5 tid=762 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 1 on 44696" daemon prio=5 tid=758 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"datanode DomainSocketWatcher" daemon prio=5 tid=739 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"1051359947@qtp-621486673-1 - Acceptor0 SelectChannelConnector@localhost:38368" daemon prio=5 tid=742 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"Timer-55" daemon prio=5 tid=1530 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=151 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3230)
        at java.lang.Thread.run(Thread.java:745)
"pool-93-thread-1"  prio=5 tid=1541 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=708 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"251618136@qtp-1888014391-0" daemon prio=5 tid=1527 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"142048293@qtp-1888014391-1 - Acceptor0 SelectChannelConnector@localhost:37853" daemon prio=5 tid=1528 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 6 on 35055" daemon prio=5 tid=726 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@33b88372" daemon prio=5 tid=734 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 1 on 54968" daemon prio=5 tid=1362 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 59927" daemon prio=5 tid=1542 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"CacheReplicationMonitor(1372058001)"  prio=5 tid=736 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 9 on 59927" daemon prio=5 tid=1551 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"LeaseRenewer:jenkins@localhost:45265" daemon prio=5 tid=1073 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:437)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$700(LeaseRenewer.java:76)
        at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:310)
        at java.lang.Thread.run(Thread.java:745)
"355583882@qtp-1551594890-0" daemon prio=5 tid=1342 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"Block report processor" daemon prio=5 tid=1349 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 4 on 54968" daemon prio=5 tid=1365 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"AsyncAppender-Dispatcher-Thread-131" daemon prio=5 tid=174 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 6 on 59927" daemon prio=5 tid=1548 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"client DomainSocketWatcher" daemon prio=5 tid=192 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"Timer-27" daemon prio=5 tid=743 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=698 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"DecommissionMonitor-0" daemon prio=5 tid=1359 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Timer-48" daemon prio=5 tid=1344 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"StorageInfoMonitor" daemon prio=5 tid=707 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server listener on 54968" daemon prio=5 tid=1352 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"refreshUsed-/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data1/current/BP-245391040-67.195.81.153-1462231876550" daemon prio=5 tid=1555 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:158)
        at java.lang.Thread.run(Thread.java:745)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=874 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 44696" daemon prio=5 tid=761 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 44696" daemon prio=5 tid=757 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"datanode DomainSocketWatcher" daemon prio=5 tid=1525 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 54968" daemon prio=5 tid=1363 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5e27c545" daemon prio=5 tid=733 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 35055" daemon prio=5 tid=723 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 7 on 35055" daemon prio=5 tid=727 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 9 on 44696" daemon prio=5 tid=767 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 35055" daemon prio=5 tid=728 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"StorageInfoMonitor" daemon prio=5 tid=1348 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 8 on 54968" daemon prio=5 tid=1369 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.PeerCache@5aa153ee" daemon prio=5 tid=626 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253)
        at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
        at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 54968" daemon prio=5 tid=1354 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"nioEventLoopGroup-18-1"  prio=10 tid=1532 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=706 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server listener on 59927" daemon prio=5 tid=1534 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@27d3fee4" daemon prio=5 tid=732 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 4 on 59927" daemon prio=5 tid=1546 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"java.util.concurrent.ThreadPoolExecutor$Worker@24d31460[State = -1, empty queue]" daemon prio=5 tid=1559 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 1 on 59927" daemon prio=5 tid=1543 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-50" daemon prio=5 tid=1346 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server idle connection scanner for port 44696" daemon prio=5 tid=750 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server Responder" daemon prio=5 tid=714 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
"pool-47-thread-1"  prio=5 tid=700 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"pool-49-thread-1"  prio=5 tid=730 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"process reaper" daemon prio=10 tid=10 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


	at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:269)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testDatanodeCursor(TestBlockScanner.java:572)
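
The timeouts in this report (here, and again in the "main" thread stack further down) come from org.apache.hadoop.test.GenericTestUtils.waitFor, which polls a boolean condition until a deadline and then fails with the "Timed out waiting for condition" message and attached thread diagnostics seen above. Below is only a minimal, self-contained sketch of that poll-until-timeout pattern in plain Java; the names (waitForCondition, checkIntervalMs, timeoutMs) are invented for the illustration and this is not Hadoop's actual GenericTestUtils code:

    import java.util.concurrent.TimeoutException;
    import java.util.function.BooleanSupplier;

    public class WaitForSketch {
        // Poll 'condition' every checkIntervalMs milliseconds until it holds,
        // or give up after timeoutMs and throw, mirroring how the tests above
        // time out while waiting for cluster state to converge.
        static void waitForCondition(BooleanSupplier condition,
                                     long checkIntervalMs, long timeoutMs)
                throws TimeoutException, InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (!condition.getAsBoolean()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new TimeoutException(
                        "Timed out waiting for condition after " + timeoutMs + " ms");
                }
                Thread.sleep(checkIntervalMs);
            }
        }

        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            // Toy condition: becomes true once 200 ms have elapsed.
            waitForCondition(() -> System.currentTimeMillis() - start >= 200,
                             10, 5000);
            System.out.println("condition met");
        }
    }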


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestart

Error Message:
Timed out waiting for /test/testTruncateWithDataNodesRestart to reach 3 replicas

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for /test/testTruncateWithDataNodesRestart to reach 3 replicas
	at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:771)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestart(TestFileTruncate.java:704)
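
This failure means DFSTestUtil.waitReplication gave up before every block of /test/testTruncateWithDataNodesRestart was reported on 3 datanodes. As a rough illustration of what "reaching 3 replicas" amounts to against the public FileSystem API (a hypothetical helper for this report, not the code DFSTestUtil actually runs):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationCheckSketch {
        // Hypothetical helper: true once every block of 'path' is reported
        // on at least 'expected' datanodes.
        static boolean hasExpectedReplicas(FileSystem fs, Path path, int expected)
                throws java.io.IOException {
            FileStatus status = fs.getFileStatus(path);
            BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                if (block.getHosts().length < expected) {
                    return false;  // this block is still under-replicated
                }
            }
            return true;
        }

        public static void main(String[] args) throws Exception {
            // Assumes fs.defaultFS in the Configuration points at a running HDFS.
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path("/test/testTruncateWithDataNodesRestart");
            System.out.println("3 replicas everywhere? "
                + hasExpectedReplicas(fs, p, 3));
        }
    }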


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture

Error Message:
expected:<17> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<17> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture(TestNameNodeMetadataConsistency.java:113)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithJournalNodes

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithJournalNodes(TestDFSUpgradeWithHA.java:687)
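
The "Thread diagnostics" block attached to the timeout below is built by org.apache.hadoop.test.TimedOutTestsListener (its buildThreadDump call is visible in the "main" thread stack inside that block), which walks Thread.getAllStackTraces(). A stripped-down sketch of the same idea, not the actual listener code:

    import java.util.Map;

    public class ThreadDumpSketch {
        // Render every live thread roughly in the
        // "name" daemon prio=.. tid=.. <state> / "at frame" layout seen in
        // the diagnostics blocks of this report.
        static String buildThreadDump() {
            StringBuilder dump = new StringBuilder();
            for (Map.Entry<Thread, StackTraceElement[]> entry
                    : Thread.getAllStackTraces().entrySet()) {
                Thread t = entry.getKey();
                dump.append('"').append(t.getName()).append('"')
                    .append(t.isDaemon() ? " daemon" : "")
                    .append(" prio=").append(t.getPriority())
                    .append(" tid=").append(t.getId())
                    .append(' ').append(t.getState()).append('\n');
                for (StackTraceElement frame : entry.getValue()) {
                    dump.append("        at ").append(frame).append('\n');
                }
            }
            return dump.toString();
        }

        public static void main(String[] args) {
            System.out.print(buildThreadDump());
        }
    }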


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls[0]

Error Message:
Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-05-03 12:08:23,670

"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@859a977" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@5380477a" daemon prio=5 tid=138 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 9 on 46911" daemon prio=5 tid=53 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"246124803@qtp-550214444-0" daemon prio=5 tid=25 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=19 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3230)
        at java.lang.Thread.run(Thread.java:745)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=21 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server listener on 4579" daemon prio=5 tid=69 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"1271135565@qtp-339740700-1 - Acceptor0 SelectChannelConnector@localhost:42205" daemon prio=5 tid=93 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"pool-5-thread-1"  prio=5 tid=55 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 7 on 29157" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"client DomainSocketWatcher" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 46911" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@5a102e69" daemon prio=5 tid=109 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 46911"  prio=5 tid=36 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"pool-12-thread-1"  prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1640)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:269)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.waitForLogRollInSharedDir(TestEditLogTailer.java:199)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRolls(TestEditLogTailer.java:178)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls(TestEditLogTailer.java:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.junit.runners.Suite.runChild(Suite.java:127)
        at org.junit.runners.Suite.runChild(Suite.java:26)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
"Timer-6" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 6 on 46911" daemon prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-4" daemon prio=5 tid=62 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 5 on 29157" daemon prio=5 tid=117 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@1520a43d" daemon prio=5 tid=76 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 46911" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@79047c8d" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"pool-9-thread-1"  prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 4579" daemon prio=5 tid=81 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"DecommissionMonitor-0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Timer-7" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 7 on 46911" daemon prio=5 tid=51 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 29157" daemon prio=5 tid=104 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 4 on 29157" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Block report processor" daemon prio=5 tid=66 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server Responder" daemon prio=5 tid=38 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"Block report processor" daemon prio=5 tid=32 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3c04b8b3" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 4579" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"416166283@qtp-550214444-1 - Acceptor0 SelectChannelConnector@localhost:58443" daemon prio=5 tid=26 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 1 on 46911" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server Responder" daemon prio=5 tid=72 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@2cbd7d77" daemon prio=5 tid=136 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 4579" daemon prio=5 tid=83 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 4579" daemon prio=5 tid=78 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-1" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@6758cd8" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"pool-3-thread-1"  prio=5 tid=24 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"pool-8-thread-1"  prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 29157" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-2" daemon prio=5 tid=29 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Timer-3" daemon prio=5 tid=61 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server Responder" daemon prio=5 tid=105 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"IPC Server listener on 46911" daemon prio=5 tid=35 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 46911" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Edit log tailer"  prio=5 tid=123 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"Timer-5" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server listener on 29157" daemon prio=5 tid=102 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Server handler 6 on 4579" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 29157" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 6 on 29157" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"2068215434@qtp-339740700-0" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 3 on 29157" daemon prio=5 tid=115 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"2029635314@qtp-314635482-1 - Acceptor0 SelectChannelConnector@localhost:37412" daemon prio=5 tid=60 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 9 on 4579" daemon prio=5 tid=87 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"ReplicationMonitor" daemon prio=5 tid=30 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"Timer-8" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"ReplicationMonitor" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"Edit log tailer"  prio=5 tid=56 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"IPC Server handler 4 on 46911" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 4579" daemon prio=5 tid=86 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 4 on 4579" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"DecommissionMonitor-0" daemon prio=5 tid=77 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"pool-6-thread-1"  prio=5 tid=58 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 46911" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"DecommissionMonitor-0" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 1 on 4579" daemon prio=5 tid=79 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"process reaper" daemon prio=10 tid=9 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 9 on 29157" daemon prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"CacheReplicationMonitor(1955777173)"  prio=5 tid=140 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 8 on 46911" daemon prio=5 tid=52 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@5cfd1e1" daemon prio=5 tid=139 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 29157"  prio=5 tid=103 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@518b3cb7" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@31b52301" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=99 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 3 on 46911" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"IPC Server handler 7 on 4579" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 2 on 4579" daemon prio=5 tid=80 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"StorageInfoMonitor" daemon prio=5 tid=31 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 29157" daemon prio=5 tid=114 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@619e65ae" daemon prio=5 tid=33 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"IPC Parameter Sending Thread #2" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"172759815@qtp-314635482-0" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 1 on 29157" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 4579"  prio=5 tid=70 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=127 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=64 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@2ff9ffb2" daemon prio=5 tid=137 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"AsyncAppender-Dispatcher-Thread-38" daemon prio=5 tid=54 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"Timer-0" daemon prio=5 tid=27 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)



Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-05-03 12:08:23,670

"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@859a977" daemon prio=5 tid=67 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@5380477a" daemon prio=5 tid=138 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3818)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 9 on 46911" daemon prio=5 tid=53 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"246124803@qtp-550214444-0" daemon prio=5 tid=25 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=19 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3230)
        at java.lang.Thread.run(Thread.java:745)
"Timer for 'NameNode' metrics system" daemon prio=5 tid=21 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server listener on 4579" daemon prio=5 tid=69 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"1271135565@qtp-339740700-1 - Acceptor0 SelectChannelConnector@localhost:42205" daemon prio=5 tid=93 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"pool-5-thread-1"  prio=5 tid=55 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 7 on 29157" daemon prio=5 tid=119 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"client DomainSocketWatcher" daemon prio=5 tid=129 runnable
java.lang.Thread.State: RUNNABLE
        at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
        at org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:511)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 46911" daemon prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@5a102e69" daemon prio=5 tid=109 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 46911"  prio=5 tid=36 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"pool-12-thread-1"  prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"main"  prio=5 tid=1 runnable
java.lang.Thread.State: RUNNABLE
        at java.lang.Thread.dumpThreads(Native Method)
        at java.lang.Thread.getAllStackTraces(Thread.java:1640)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
        at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:269)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.waitForLogRollInSharedDir(TestEditLogTailer.java:199)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRolls(TestEditLogTailer.java:178)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls(TestEditLogTailer.java:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.junit.runners.Suite.runChild(Suite.java:127)
        at org.junit.runners.Suite.runChild(Suite.java:26)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"Reference Handler" daemon prio=10 tid=2 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
"Timer-6" daemon prio=5 tid=94 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 6 on 46911" daemon prio=5 tid=50 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-4" daemon prio=5 tid=62 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 5 on 29157" daemon prio=5 tid=117 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@1520a43d" daemon prio=5 tid=76 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 46911" daemon prio=5 tid=44 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@79047c8d" daemon prio=5 tid=22 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"pool-9-thread-1"  prio=5 tid=91 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 4579" daemon prio=5 tid=81 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"DecommissionMonitor-0" daemon prio=5 tid=110 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"Timer-7" daemon prio=5 tid=95 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 7 on 46911" daemon prio=5 tid=51 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 29157" daemon prio=5 tid=104 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 4 on 29157" daemon prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Block report processor" daemon prio=5 tid=66 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server Responder" daemon prio=5 tid=38 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"Block report processor" daemon prio=5 tid=32 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@3c04b8b3" daemon prio=5 tid=57 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=98 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 4579" daemon prio=5 tid=71 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"416166283@qtp-550214444-1 - Acceptor0 SelectChannelConnector@localhost:58443" daemon prio=5 tid=26 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 1 on 46911" daemon prio=5 tid=45 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server Responder" daemon prio=5 tid=72 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@2cbd7d77" daemon prio=5 tid=136 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:339)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 5 on 4579" daemon prio=5 tid=83 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 0 on 4579" daemon prio=5 tid=78 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-1" daemon prio=5 tid=28 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@6758cd8" daemon prio=5 tid=90 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"pool-3-thread-1"  prio=5 tid=24 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"pool-8-thread-1"  prio=5 tid=88 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 29157" daemon prio=5 tid=112 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-2" daemon prio=5 tid=29 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Timer-3" daemon prio=5 tid=61 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server Responder" daemon prio=5 tid=105 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1079)
        at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1062)
"IPC Server listener on 46911" daemon prio=5 tid=35 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Parameter Sending Thread #0" daemon prio=5 tid=124 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server idle connection scanner for port 46911" daemon prio=5 tid=37 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"Edit log tailer"  prio=5 tid=123 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"Timer-5" daemon prio=5 tid=63 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server listener on 29157" daemon prio=5 tid=102 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
"IPC Server handler 6 on 4579" daemon prio=5 tid=84 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 29157" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 6 on 29157" daemon prio=5 tid=118 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"2068215434@qtp-339740700-0" daemon prio=5 tid=92 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 3 on 29157" daemon prio=5 tid=115 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"2029635314@qtp-314635482-1 - Acceptor0 SelectChannelConnector@localhost:37412" daemon prio=5 tid=60 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
        at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
        at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
        at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
"IPC Server handler 9 on 4579" daemon prio=5 tid=87 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"ReplicationMonitor" daemon prio=5 tid=30 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"Timer-8" daemon prio=5 tid=96 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"ReplicationMonitor" daemon prio=5 tid=97 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"Edit log tailer"  prio=5 tid=56 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:389)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)
"IPC Server handler 4 on 46911" daemon prio=5 tid=48 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 8 on 4579" daemon prio=5 tid=86 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 4 on 4579" daemon prio=5 tid=82 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"DecommissionMonitor-0" daemon prio=5 tid=77 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"pool-6-thread-1"  prio=5 tid=58 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 46911" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"DecommissionMonitor-0" daemon prio=5 tid=43 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 1 on 4579" daemon prio=5 tid=79 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"process reaper" daemon prio=10 tid=9 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 9 on 29157" daemon prio=5 tid=121 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"CacheReplicationMonitor(1955777173)"  prio=5 tid=140 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
        at org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server handler 8 on 46911" daemon prio=5 tid=52 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@5cfd1e1" daemon prio=5 tid=139 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3905)
        at java.lang.Thread.run(Thread.java:745)
"Socket Reader #1 for port 29157"  prio=5 tid=103 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor@518b3cb7" daemon prio=5 tid=42 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReconstructionBlocks$PendingReconstructionMonitor.run(PendingReconstructionBlocks.java:223)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@31b52301" daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"Block report processor" daemon prio=5 tid=99 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4511)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4500)
"IPC Server handler 3 on 46911" daemon prio=5 tid=47 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Signal Dispatcher" daemon prio=9 tid=4 runnable
java.lang.Thread.State: RUNNABLE
"IPC Server handler 7 on 4579" daemon prio=5 tid=85 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 2 on 4579" daemon prio=5 tid=80 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"StorageInfoMonitor" daemon prio=5 tid=31 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 29157" daemon prio=5 tid=114 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@619e65ae" daemon prio=5 tid=33 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:421)
        at java.lang.Thread.run(Thread.java:745)
"IPC Parameter Sending Thread #2" daemon prio=5 tid=128 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"172759815@qtp-314635482-0" daemon prio=5 tid=59 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
"IPC Server handler 1 on 29157" daemon prio=5 tid=113 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Socket Reader #1 for port 4579"  prio=5 tid=70 runnable
java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:848)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:827)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=127 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"ReplicationMonitor" daemon prio=5 tid=64 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:4174)
        at java.lang.Thread.run(Thread.java:745)
"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@2ff9ffb2" daemon prio=5 tid=137 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3776)
        at java.lang.Thread.run(Thread.java:745)
"StorageInfoMonitor" daemon prio=5 tid=65 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$StorageInfoDefragmenter.run(BlockManager.java:4209)
        at java.lang.Thread.run(Thread.java:745)
"AsyncAppender-Dispatcher-Thread-38" daemon prio=5 tid=54 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"Timer-0" daemon prio=5 tid=27 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)


	at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:269)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.waitForLogRollInSharedDir(TestEditLogTailer.java:199)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRolls(TestEditLogTailer.java:178)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testNN1TriggersLogRolls(TestEditLogTailer.java:146)
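
(Editor's note on the failure above: the TimeoutException is raised by org.apache.hadoop.test.GenericTestUtils.waitFor, which repeatedly polls a boolean condition and gives up after a deadline; the long thread dump printed above is part of its timeout diagnostics. The following self-contained Java sketch only illustrates the general poll-until-timeout idea behind that helper. It is not the Hadoop source, and every class and method name in it is made up for illustration.)

    import java.util.concurrent.TimeoutException;
    import java.util.function.Supplier;

    public final class PollUntilTimeoutSketch {
      // Poll 'condition' every 'intervalMs' milliseconds until it returns true,
      // or throw TimeoutException once 'timeoutMs' milliseconds have elapsed.
      public static void waitFor(Supplier<Boolean> condition, long intervalMs, long timeoutMs)
          throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.get()) {
          if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException("Timed out waiting for condition");
          }
          Thread.sleep(intervalMs);
        }
      }

      public static void main(String[] args) throws Exception {
        final long start = System.currentTimeMillis();
        // Illustrative condition: becomes true roughly two seconds after start.
        waitFor(() -> System.currentTimeMillis() - start > 2000L, 100L, 10000L);
        System.out.println("condition met");
      }
    }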


FAILED:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot(TestOpenFilesWithSnapshot.java:82)
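
(Editor's note: the timeout above is thrown by MiniDFSCluster.waitClusterUp while the test restarts its NameNode, as the trace shows. For readers unfamiliar with the harness, the sketch below shows the usual MiniDFSCluster lifecycle in an HDFS unit test -- build, use, restart, shutdown. It is a generic, hedged example assuming the standard Builder API, not the body of TestOpenFilesWithSnapshot.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterLifecycleSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Start an in-process HDFS cluster with one DataNode.
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)
            .build();
        try {
          cluster.waitActive();                // block until the NameNode and DataNode report in
          FileSystem fs = cluster.getFileSystem();
          fs.mkdirs(new Path("/example"));     // exercise the cluster

          // Restarting the NameNode goes through waitClusterUp(), which is where the
          // "Timed out waiting for Mini HDFS Cluster to start" in the trace originates.
          cluster.restartNameNode();
        } finally {
          cluster.shutdown();
        }
      }
    }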


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":50127; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":50127; 
	at sun.security.krb5.KdcComm.send(KdcComm.java:250)
	at sun.security.krb5.KdcComm.send(KdcComm.java:191)
	at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:417)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:799)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:795)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
	at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:417)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1547)
	at org.apache.hadoop.ipc.Client.call(Client.java:1394)
	at org.apache.hadoop.ipc.Client.call(Client.java:1358)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:569)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy26.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:192)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:146)
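
(Editor's note: the GSS error above surfaces while the test issues an HDFS RPC as a Kerberos-authenticated user -- note the UserGroupInformation.doAs frame -- and the SASL/GSSAPI handshake fails because no KDC reply arrives. For orientation only, here is a hedged sketch of the usual login-from-keytab-and-doAs pattern such secure tests rely on. The principal, keytab path and directory are placeholders, and this is not the code of TestRollingFileSystemSinkWithSecureHdfs, which obtains its credentials from an in-process KDC.)

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class SecureMkdirSketch {
      public static void main(String[] args) throws Exception {
        final Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");  // enable Kerberos auth
        UserGroupInformation.setConfiguration(conf);

        // Placeholder principal and keytab; a real test gets these from its test KDC.
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
            "hdfs/localhost@EXAMPLE.COM", "/tmp/hdfs.keytab");

        // The mkdirs RPC below is the call that fails in the log when the
        // client cannot obtain a service ticket from the KDC.
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
          FileSystem fs = FileSystem.get(conf);
          fs.mkdirs(new Path("/secure-test"));
          return null;
        });
      }
    }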




Jenkins build is back to normal : Hadoop-Hdfs-trunk #3090

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3090/changes>




Build failed in Jenkins: Hadoop-Hdfs-trunk #3089

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3089/changes>

Changes:

[szetszwo] HADOOP-12957. Limit the number of outstanding async calls.  Contributed

[wang] MAPREDUCE-6537. Include hadoop-pipes examples in the release tarball.

------------------------------------------
[...truncated 9944 lines...]
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.026 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 118.558 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.952 sec - in org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.351 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.954 sec - in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.269 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.668 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.771 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.727 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 393.593 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.599 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.833 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.974 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.538 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.049 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.246 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.717 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.099 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.137 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.831 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.356 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.888 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.841 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.507 sec - in org.apache.hadoop.net.TestNetworkTopology
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.425 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.09 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.838 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.534 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.797 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.473 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.073 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.961 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.245 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.165 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.408 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.532 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.936 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.53 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.1 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.965 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.398 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.671 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.346 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.278 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.999 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.999 sec - in org.apache.hadoop.cli.TestHDFSCLI
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.195 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.046 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.11 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.771 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.681 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.53 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.333 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.546 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.765 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.637 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.605 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.361 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.971 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.689 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.226 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.419 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.554 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 10.982 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.517 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.152 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.328 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.239 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 25.655 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.451 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.98 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 35.12 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.718 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Results :

Failed tests: 
  TestSafeModeWithStripedFile.testStripedFile0:76->doTest:156 null
  TestBlockManager.testBlockReportQueueing:1074 null
  TestNameNodeMetadataConsistency.testGenerationStampInFuture:113 expected:<17> but was:<0>
  TestDFSUpgradeWithHA.testRollbackWithJournalNodes:687 null
  TestRollingUpgrade.testRollback:351 Test resulted in an unexpected exit

Tests in error: 
  TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS:146->createDirectoriesSecurely:192 » IO
  TestDelegationTokenForProxyUser.testWebHdfsDoAs:170 »  test timed out after 50...
  TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite:184->internalTestConcurrentAsyncRenameWithOverwrite:221->Object.wait:-2 » 
  TestAsyncDFSRename.testConservativeConcurrentAsyncRenameWithOverwrite:191->internalTestConcurrentAsyncRenameWithOverwrite:221 » 
  TestAsyncDFSRename.testAsyncRenameWithOverwrite:56 » IO Cannot remove data dir...
  TestBlockScanner.testDatanodeCursor:572 » Timeout Timed out waiting for condit...
  TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot:82 » IO Ti...
  TestEditLogTailer.testNN1TriggersLogRolls:146->testStandbyTriggersLogRolls:178->waitForLogRollInSharedDir:199 » Timeout
  TestFileTruncate.testTruncateWithDataNodesRestart:704 » Timeout Timed out wait...

Tests run: 4403, Failures: 5, Errors: 9, Skipped: 17
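
For reference when reading the failed-test lines above: the "expected:<17> but was:<0>" form is JUnit 4's standard org.junit.Assert.assertEquals failure message. A minimal, hypothetical sketch (class and variable names are illustrative only, not taken from the Hadoop source) of a test that fails with exactly that wording:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ExpectedButWasExample {
      @Test
      public void failsWithExpectedButWasMessage() {
        long expectedGenStamp = 17L; // value the assertion expects
        long actualGenStamp = 0L;    // value the code under test returned
        // Fails with: java.lang.AssertionError: expected:<17> but was:<0>
        assertEquals(expectedGenStamp, actualGenStamp);
      }
    }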

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:12 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:39 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.105 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:44 h
[INFO] Finished at: 2016-05-03T00:42:44+00:00
[INFO] Final Memory: 59M/804M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org


Build failed in Jenkins: Hadoop-Hdfs-trunk #3088

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3088/changes>

Changes:

[cmccabe] HADOOP-13072. WindowsGetSpaceUsed constructor should be public

------------------------------------------
[...truncated 5179 lines...]
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.881 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.615 sec - in org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.332 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.348 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.441 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.435 sec - in org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.221 sec - in org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.917 sec - in org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.33 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.022 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.792 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.637 sec - in org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.41 sec - in org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.599 sec - in org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.589 sec - in org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.37 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.558 sec - in org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.TestGenericRefresh
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.911 sec - in org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.tracing.TestTracing
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.771 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.446 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.529 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.security.TestPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.155 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.002 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.554 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.598 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.546 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.402 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.433 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 12.064 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.592 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.257 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.745 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.529 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.784 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.433 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.461 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.59 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.827 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.829 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.53 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.857 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.469 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.44 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.906 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.564 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.358 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.278 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.932 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.498 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.402 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.394 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.462 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 6.243 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.368 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.836 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 162.856 sec - in org.apache.hadoop.hdfs.TestDecommission
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.453 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.664 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.391 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.684 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.23 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.349 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.295 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.323 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.839 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.9 sec - in org.apache.hadoop.tools.TestTools
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.91 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
Running org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.791 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.926 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.687 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.42 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.588 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.315 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.574 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.386 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.91 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.025 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.787 sec - in org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.007 sec - in org.apache.hadoop.cli.TestHDFSCLI

Results :

Tests in error: 
  TestHFlush.testHFlushInterrupted » ClosedByInterrupt

Tests run: 4401, Failures: 0, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:35 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [55:55 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.096 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 h
[INFO] Finished at: 2016-05-02T18:29:15+00:00
[INFO] Final Memory: 59M/858M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results



Hadoop-Hdfs-trunk - Build # 3088 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3088/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5372 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:35 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [55:55 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.096 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 h
[INFO] Finished at: 2016-05-02T18:29:15+00:00
[INFO] Final Memory: 59M/858M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:657)
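
For context on this trace: DataStreamer.run() was flushing over an interruptible NIO socket channel when the test interrupted the thread, and the JDK (AbstractInterruptibleChannel, visible at the top of the trace) closes the channel and raises ClosedByInterruptException when a thread doing blocking channel I/O is interrupted. A minimal, self-contained Java sketch of that JDK behavior (the class below is illustrative only, not part of Hadoop or of this test):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.ClosedByInterruptException;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class ClosedByInterruptDemo {
      public static void main(String[] args) throws Exception {
        // Accept a connection but never read from it, so the writer's send
        // buffers eventually fill and write() blocks.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        server.accept();

        Thread writer = new Thread(() -> {
          ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
          try {
            while (true) {      // keep writing until the kernel buffers are full
              buf.clear();
              client.write(buf);
            }
          } catch (ClosedByInterruptException e) {
            // Interrupting a thread blocked in an interruptible-channel write
            // closes the channel and throws this exception -- the same path as
            // the stack trace above.
            System.out.println("writer got ClosedByInterruptException");
          } catch (IOException e) {
            System.out.println("writer got " + e);
          }
        });
        writer.start();

        Thread.sleep(1000);   // let the writer block inside write()
        writer.interrupt();   // mirrors the test interrupting the flushing thread
        writer.join();
        server.close();
      }
    }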




Build failed in Jenkins: Hadoop-Hdfs-trunk #3087

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3087/changes>

Changes:

[aw] HADOOP-13012. yetus-wrapper should fail sooner when download fails

[aw] HADOOP-13045. hadoop_add_classpath is not working in .hadooprc

------------------------------------------
[...truncated 6609 lines...]
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 156.686 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.543 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.069 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.632 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.591 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.833 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 141.634 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.257 sec - in org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.859 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.746 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 234.324 sec - in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.524 sec - in org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.403 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.596 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 513.384 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.779 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.277 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 134.158 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.102 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 175.17 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.409 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.894 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.578 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.238 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.414 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.45 sec - in org.apache.hadoop.TestGenericRefresh
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.498 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.986 sec - in org.apache.hadoop.tools.TestTools
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.1 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.762 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 73.567 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.31 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.62 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.542 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.75 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.235 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.507 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.687 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.981 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.804 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.53 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.122 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.578 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.851 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.307 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.099 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.93 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.177 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.707 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.353 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.74 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.799 sec - in org.apache.hadoop.cli.TestHDFSCLI
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.425 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.933 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.089 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.501 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.633 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.119 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.702 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.207 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.963 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.698 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.795 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.843 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.28 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.782 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.966 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.291 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 11.681 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.081 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.348 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.104 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.756 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 29.733 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.216 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.685 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 34.717 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 143.466 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Results :

Failed tests: 
  TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten:609->testRemoveVolumeBeingWrittenForDatanode:694 expected:<0> but was:<10506638115>
  TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure:412 There is no under replicated block after volume failure
  TestNamenodeCapacityReport.testXceiverCount:280->checkClusterHealth:320 expected:<26.0> but was:<27.0>

Tests in error: 
  TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS:146->createDirectoriesSecurely:206 » IO
  TestFsDatasetImpl.testRemoveVolumeBeingWritten:630 »  test timed out after 300...
  TestHFlush.testHFlushInterrupted » ClosedByInterrupt
  TestSecureNNWithQJM.testSecureMode:167->doNNWithQJMTest:193->restartNameNode:205 » 

Tests run: 4401, Failures: 3, Errors: 4, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:22 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:39 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.119 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:44 h
[INFO] Finished at: 2016-04-30T21:33:27+00:00
[INFO] Final Memory: 58M/795M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
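
The resume hint above leaves the Maven goals as a placeholder, since they depend on how the Jenkins job invoked Maven. A minimal local sketch of resuming the reactor at the failed module, assuming (purely for illustration) that the original goals were "clean install":

  mvn clean install -rf :hadoop-hdfs        # resume the reactor from the hadoop-hdfs module
  mvn clean install -rf :hadoop-hdfs -e -X  # same, with full stack traces (-e) and debug logging (-X), as suggested above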

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org


Build failed in Jenkins: Hadoop-Hdfs-trunk #3086

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3086/changes>

Changes:

[raviprak] HADOOP-12563. Updated utility (dtutil) to create/modify token files.

------------------------------------------
[...truncated 6631 lines...]
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.046 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 141.16 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.885 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.023 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.919 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.522 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.205 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.994 sec - in org.apache.hadoop.TestGenericRefresh
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.956 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.39 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.473 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.12 sec - in org.apache.hadoop.tools.TestJMXGet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.738 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.043 sec - in org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.222 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.317 sec - in org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.953 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.056 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.287 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 130.703 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.52 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.299 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.206 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.2 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.063 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.373 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.92 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.143 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.972 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.058 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.972 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.787 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.134 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.95 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.514 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.031 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.441 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.991 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.482 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.31 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.742 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.614 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.557 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.667 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.449 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.117 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.775 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.099 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.053 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.629 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 12.426 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.702 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.451 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.746 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.426 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 29.212 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.077 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.151 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 36.209 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.991 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Results :

Failed tests: 
  TestWebHdfsTimeouts.testAuthUrlReadTimeout:198 Expected to find 'localhost:44853: Read timed out' but got unexpected exception:java.net.SocketTimeoutException: localhost:44853: null
	at java.net.SocksSocketImpl.remainingMillis(SocksSocketImpl.java:111)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:684)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:637)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:709)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:555)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:586)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:582)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1458)
	at org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.testAuthUrlReadTimeout(TestWebHdfsTimeouts.java:195)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

  TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure:412 There is no under replicated block after volume failure
  TestNameNodeMetadataConsistency.testGenerationStampInFuture:113 expected:<17> but was:<0>
  TestRollingUpgrade.testRollback:351 Test resulted in an unexpected exit
  TestDataTransferKeepalive.testClientResponsesKeepAliveTimeout:144->assertXceiverCount:250 Expected 0 xceivers, found 2

Tests in error: 
  TestDataNodeMultipleRegistrations.testClusterIdMismatchAtStartupWithHA:253 »  ...

Tests run: 4401, Failures: 5, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:39 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:41 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.145 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:47 h
[INFO] Finished at: 2016-04-30T07:36:16+00:00
[INFO] Final Memory: 66M/878M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
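
Each class listed under "Failed tests" above can usually be reproduced in isolation with surefire's test filter, scoped to the hadoop-hdfs module. A hedged sketch, where the module path is taken from the surefire-reports path in the error message and the chosen class is just one of those listed:

  cd hadoop-hdfs-project/hadoop-hdfs
  mvn test -Dtest=TestWebHdfsTimeouts   # run a single test class through surefire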

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org


Build failed in Jenkins: Hadoop-Hdfs-trunk #3085

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3085/changes>

Changes:

[kihwal] HDFS-10347. Namenode report bad block method doesn't log the bad block

------------------------------------------
[...truncated 6053 lines...]
Running org.apache.hadoop.hdfs.TestRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.769 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.639 sec - in org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.425 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.949 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.957 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.307 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.176 sec - in org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.633 sec - in org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.95 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.021 sec - in org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.594 sec - in org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.777 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.623 sec - in org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.976 sec - in org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.681 sec - in org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.696 sec - in org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.TestGenericRefresh
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.9 sec - in org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.tracing.TestTracing
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.993 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.632 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.349 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.193 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.277 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.401 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.52 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.552 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 13.82 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.682 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.98 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.737 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.067 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.762 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.901 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.025 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.948 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.794 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.225 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.386 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.889 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.803 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.77 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.309 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 15.927 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.627 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.549 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.561 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.987 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.213 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.65 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.521 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.573 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.458 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 6.328 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 162.049 sec - in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.51 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.06 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.603 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.969 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.033 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.899 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.353 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.216 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.268 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.454 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.981 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.609 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.943 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.839 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.382 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.882 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.58 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.764 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.74 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.973 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.12 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.958 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.825 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.28 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.082 sec - in org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.794 sec - in org.apache.hadoop.cli.TestHDFSCLI

Results :

Tests in error: 
  TestDFSClientRetries.testNamenodeRestart:953->namenodeRestartTest:1087 »  test...

Tests run: 4401, Failures: 0, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:05 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [56:48 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.098 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 h
[INFO] Finished at: 2016-04-29T23:06:41+00:00
[INFO] Final Memory: 58M/755M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
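
The surefire-reports directory referenced in the error message contains a plain-text and an XML report per test class, so on a local run the quickest way to see why TestDFSClientRetries errored is to read those files directly. A minimal sketch, assuming the same module layout as in the log:

  ls hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/
  less hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/org.apache.hadoop.hdfs.TestDFSClientRetries.txt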

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org


Build failed in Jenkins: Hadoop-Hdfs-trunk #3084

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3084/changes>

Changes:

[szetszwo] HDFS-10335 Mover$Processor#chooseTarget() always chooses the first

[cnauroth] MAPREDUCE-6672. TestTeraSort fails on Windows. Contributed by Tibor

------------------------------------------
[...truncated 6532 lines...]
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.704 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.786 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.975 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.235 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.666 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.076 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.47 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.765 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.051 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.512 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.498 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.206 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.794 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.356 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.077 sec - in org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.331 sec - in org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.157 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.297 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.474 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.603 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.975 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.824 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.832 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.279 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.43 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.698 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.665 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.875 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.301 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.055 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.228 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.293 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.954 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.893 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.711 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.482 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.727 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.189 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.641 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.237 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.351 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.229 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.941 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.78 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.508 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.263 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.4 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 9.443 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.298 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.301 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.901 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.453 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 22.209 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.156 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.359 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 27.578 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 118.106 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Results :

Failed tests: 
  TestSafeModeWithStripedFile.testStripedFile0:76->doTest:156 null
  TestFsDatasetImpl.testCleanShutdownOfVolume:686 Expected to find 'DatanodeInfoWithStorage[127.0.0.1:56359,DS-e7dc216b-bed0-42ad-9476-78920d53c6a9,DISK]' but got unexpected exception:java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:56359,DS-9433b7b6-78c3-4d63-9e17-9710de9b21ba,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)

  TestDecommissioningStatus.testDecommissionStatus:302->checkDecommissionStatus:196 Unexpected num under-replicated blocks expected:<4> but was:<3>
  TestEditLog.testBatchedSyncWithClosedLogs:594 logging edit without syncing should do not affect txid expected:<1> but was:<2>
  TestMissingBlocksAlert.testMissingBlocksAlert:119 expected:<2> but was:<4>

Tests in error: 
  TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS:146->createDirectoriesSecurely:206 » IO
  TestOpenFilesWithSnapshot.testOpenFilesWithRename:210 » IO Timed out waiting f...
  TestFsck.testStoragePoliciesCK:1667->runFsck:152 » NoClassDefFound org/apache/...
  TestFsck.testToCheckTheFsckCommandOnIllegalArguments:1116->runFsck:152 » NoClassDefFound
  TestFsck.testUnderMinReplicatedBlock:860->runFsck:152 » NoClassDefFound org/ap...
  TestFsck.testFsckMoveAndDelete:563->runFsck:152 » NoClassDefFound org/apache/h...
  TestFsck.testFsckSymlink:1366->runFsck:152 » NoClassDefFound org/apache/hadoop...
  TestFsck.testFsck:182->runFsck:152 » NoClassDefFound org/apache/hadoop/securit...
  TestFsck.testFsckForSnapshotFiles:1388->runFsck:152 » NoClassDefFound org/apac...
  TestFsck.testBlockIdCKDecommission:1510->runFsck:152 » NoClassDefFound org/apa...
  TestFsck.testCorruptBlock:783->runFsck:152 » NoClassDefFound org/apache/hadoop...
  TestFsck.testFsckMove:352->runFsck:152 » NoClassDefFound org/apache/hadoop/sec...
  TestFsck.testFsckMoveAfterCorruption:1944->runFsck:152 » NoClassDefFound org/a...
  TestFsck.testFsckListCorruptFilesBlocks:1048->runFsck:152 » NoClassDefFound or...
  TestFsck.testBlockIdCKCorruption:1605->runFsck:152 » NoClassDefFound org/apach...
  TestFsck.testFsckError:1017->runFsck:152 » NoClassDefFound org/apache/hadoop/s...
  TestAddStripedBlocks.setup:82 » NoClassDefFound org/apache/hadoop/util/Shutdow...
  TestAddStripedBlocks.setup:82 » NoClassDefFound org/apache/hadoop/util/Shutdow...
  TestAddStripedBlocks.setup:82 » NoClassDefFound org/apache/hadoop/util/Shutdow...
  TestAddStripedBlocks.setup:82 » NoClassDefFound org/apache/hadoop/util/Shutdow...
  TestAddStripedBlocks.setup:82 » NoClassDefFound org/apache/hadoop/util/Shutdow...
  TestAddStripedBlocks.setup:82 » NoClassDefFound org/apache/hadoop/util/Shutdow...
  TestFsck.testFsckNonExistent:272->runFsck:152 » NoClassDefFound org/apache/had...
  TestFsck.testFsckWithDecommissionedReplicas:1725->runFsck:152 » NoClassDefFound
  TestFsck.testFsckReplicaDetails:939->runFsck:152 » NoClassDefFound org/apache/...
  TestFsck.testECFsck:1799 » NoClassDefFound org/apache/hadoop/io/erasurecode/Co...
  TestFsck.testFsckOpenECFiles:702 » NoClassDefFound org/apache/hadoop/io/erasur...
  TestFsck.testBlockIdCK:1451->runFsck:152 » NoClassDefFound org/apache/hadoop/s...
  TestFsck.testFsckListCorruptSnapshotFiles:1863->runFsck:152 » NoClassDefFound ...
  TestFsck.testFsckPermission:304 » NoClassDefFound org/apache/hadoop/security/S...
  TestFsck.testFsckOpenFiles:637->runFsck:152 » NoClassDefFound org/apache/hadoo...
  TestEditLogTailer.testNN1TriggersLogRolls:146->testStandbyTriggersLogRolls:178->waitForLogRollInSharedDir:199 » Timeout
  TestHFlush.testHFlushInterrupted » ClosedByInterrupt
  TestFileAppend.testMultipleAppends » IO Failed to replace a bad datanode on th...

Tests run: 4401, Failures: 5, Errors: 34, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:55 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:29 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.106 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:35 h
[INFO] Finished at: 2016-04-29T20:52:22+00:00
[INFO] Final Memory: 58M/819M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
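For reference, the hints above correspond to command lines roughly like the
following (a minimal sketch: it assumes a local checkout of the Hadoop source
tree, and "package" stands in for whatever goals the Jenkins job actually
invoked, which this log does not show):

    # Re-run with full stack traces, resuming from the failed module
    mvn package -e -rf :hadoop-hdfs

    # Re-run with full debug logging enabled
    mvn package -X -rf :hadoop-hdfs

    # Re-run one failing test class from the summary above via Surefire
    mvn test -Dtest=TestFsck -pl hadoop-hdfs-project/hadoop-hdfs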
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org


Build failed in Jenkins: Hadoop-Hdfs-trunk #3083

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3083/changes>

Changes:

[rkanter] Remove parent's env vars from child processes

------------------------------------------
[...truncated 5260 lines...]
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.126 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.446 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.279 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.812 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.224 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 207.299 sec - in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.725 sec - in org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.787 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.479 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.749 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 403.237 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.556 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.915 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.957 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.479 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.223 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.376 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.451 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.557 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.022 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.812 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.672 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.901 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.881 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.725 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.643 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.599 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.833 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.982 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.043 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.045 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.843 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.298 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.955 sec - in org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.27 sec - in org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.795 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.572 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.015 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.662 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.352 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.341 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.405 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.778 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.129 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.599 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.319 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.206 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.348 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.258 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.694 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.356 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.227 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.085 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.507 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.894 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.037 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.765 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.311 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.484 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.554 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.382 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.351 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.321 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.729 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.913 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.735 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.961 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 10.909 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.446 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.477 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.734 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 24.528 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.98 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.423 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.643 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 30.64 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 125.04 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Results :

Failed tests: 
  TestPendingInvalidateBlock.testPendingDeletion:92 expected:<2> but was:<1>
  TestNamenodeCapacityReport.testXceiverCount:280->checkClusterHealth:320 expected:<20.0> but was:<21.0>
  TestRollingUpgrade.testRollback:351 Test resulted in an unexpected exit

Tests in error: 
  TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS:95->createDirectoriesSecurely:192 » IO
  TestOpenFilesWithSnapshot.testOpenFilesWithRename:210 » IO Timed out waiting f...

Tests run: 4401, Failures: 3, Errors: 2, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:01 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:29 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.103 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:34 h
[INFO] Finished at: 2016-04-29T18:58:52+00:00
[INFO] Final Memory: 58M/792M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results



Build failed in Jenkins: Hadoop-Hdfs-trunk #3082

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3082/changes>

Changes:

[kihwal] HDFS-10260. TestFsDatasetImpl#testCleanShutdownOfVolume often fails.

------------------------------------------
[...truncated 5303 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.65 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.399 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.71 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.848 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 201.911 sec - in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.744 sec - in org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.498 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 374.4 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.261 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.394 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.619 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 137.831 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.262 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.581 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.267 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 158.212 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.043 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.523 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.708 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.544 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.666 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.674 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.096 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.28 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.324 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.588 sec - in org.apache.hadoop.net.TestNetworkTopology
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.83 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.078 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.557 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.848 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.353 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.272 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.544 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.814 sec - in org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.168 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.604 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.364 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.486 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.262 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.955 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.145 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.051 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.214 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.046 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.16 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.884 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.877 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.296 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.975 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.705 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.534 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.922 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.501 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.73 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.733 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.82 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.245 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.974 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.883 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.563 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.596 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.643 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.607 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.857 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.041 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.696 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 10.883 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.339 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.416 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.414 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.489 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 25.762 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.007 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.086 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 31.672 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.622 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Results :

Failed tests: 
  TestReconstructStripedBlocksWithRackAwareness.testReconstructForNotEnoughRacks:153 expected:<0> but was:<1>
  TestDirectoryScanner.testThrottling:701 Throttle is too permissive
  TestDFSUpgradeWithHA.testRollbackWithJournalNodes:687 null
  TestRollingUpgrade.testCheckpointWithMultipleNN:568->testCheckpoint:604 Test resulted in an unexpected exit
  TestRollingUpgrade.testDFSAdminRollingUpgradeCommands:102->checkMxBeanIsNull:295 expected null, but was:<javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo$Bean,items=((itemName=blockPoolId,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=createdRollbackImages,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=finalizeTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=startTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={blockPoolId=BP-1681401862-67.195.81.153-1461949367675, createdRollbackImages=true, finalizeTime=0, startTime=1461949370752})>
  TestRollingUpgrade.testRollback:324->checkMxBeanIsNull:295 expected null, but was:<javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo$Bean,items=((itemName=blockPoolId,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=createdRollbackImages,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=finalizeTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=startTime,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)))),contents={blockPoolId=BP-1681401862-67.195.81.153-1461949367675, createdRollbackImages=true, finalizeTime=0, startTime=1461949370752})>

Tests in error: 
  TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS:95->createDirectoriesSecurely:206 » IO
  TestDecommissioningStatus.testDecommissionDeadDN:414 » Bind Problem binding to...

Tests run: 4401, Failures: 6, Errors: 2, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:48 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:27 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.150 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:32 h
[INFO] Finished at: 2016-04-29T17:21:11+00:00
[INFO] Final Memory: 60M/876M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results



Build failed in Jenkins: Hadoop-Hdfs-trunk #3081

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3081/changes>

Changes:

[vvasudev] YARN-3998. Add support in the NodeManager to re-launch containers.

[iwasakims] HADOOP-12378. Fix findbugs warnings in hadoop-tools module. Contributed

------------------------------------------
[...truncated 8014 lines...]
        at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@4f6815ff" daemon prio=5 tid=21 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Timer-16" daemon prio=5 tid=195 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"pool-12-thread-1"  prio=5 tid=116 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 3 on 60169" daemon prio=5 tid=120 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.301 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 62.793 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestCrcCorruption
testCorruptionDuringWrt(org.apache.hadoop.hdfs.TestCrcCorruption)  Time elapsed: 50.227 sec  <<< ERROR!
java.lang.Exception: test timed out after 50000 milliseconds
	at java.lang.Object.wait(Native Method)
	at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:136)

        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.644 sec - in org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.347 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.131 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.054 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.589 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.149 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.075 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.489 sec - in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.755 sec - in org.apache.hadoop.hdfs.TestHFlush
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.174 sec - in org.apache.hadoop.hdfs.TestDisableConnCache
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 160.183 sec - in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.43 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.662 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.828 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.144 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.504 sec - in org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.58 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.321 sec - in org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.247 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.759 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.679 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.458 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.652 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.714 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestAclsEndToEnd
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.569 sec - in org.apache.hadoop.hdfs.TestParallelRead
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 156.964 sec - in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.106 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.362 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.534 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.177 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.965 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.87 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.353 sec - in org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Running org.apache.hadoop.hdfs.TestRead
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.163 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.061 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.651 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.684 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.565 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.305 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.444 sec - in org.apache.hadoop.hdfs.TestAclsEndToEnd
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.854 sec - in org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.144 sec - in org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.466 sec - in org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.094 sec - in org.apache.hadoop.hdfs.TestBlockMissingException
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.765 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.845 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.356 sec - in org.apache.hadoop.hdfs.TestReplication
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.422 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.311 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.845 sec - in org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestClose
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.83 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.523 sec - in org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.395 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.054 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.tracing.TestTracing
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.637 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.601 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.337 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.172 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.82 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.158 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120

Results :

Tests in error: 
  TestCrcCorruption.testCorruptionDuringWrt:136->Object.wait:-2 »  test timed ou...

Tests run: 4394, Failures: 0, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:54 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [59:09 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.127 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:03 h
[INFO] Finished at: 2016-04-29T12:50:35+00:00
[INFO] Final Memory: 75M/589M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
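As a side note for anyone reproducing this locally: the resume hint printed above maps to an invocation along the lines of the one below. This is only a sketch; the exact goals and profiles the Jenkins job passes are not visible in the excerpt, and the -Dtest filter is an assumed convenience for re-running just the timed-out test class rather than anything taken from the job configuration.

    # resume the Maven reactor at the failing module and run only the failing test (assumed filter)
    mvn test -rf :hadoop-hdfs -Dtest=TestCrcCorruption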



Hadoop-Hdfs-trunk - Build # 3081 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3081/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8207 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:54 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [59:09 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.127 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:03 h
[INFO] Finished at: 2016-04-29T12:50:35+00:00
[INFO] Final Memory: 75M/589M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

Error Message:
test timed out after 50000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 50000 milliseconds
	at java.lang.Object.wait(Native Method)
	at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:136)
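The "test timed out after 50000 milliseconds" failure above is JUnit 4's per-test timeout firing while the writer was still blocked in DataStreamer.waitForAckedSeqno, i.e. close() never saw the ack for its final packet within the 50-second limit. A minimal sketch of the mechanism, assuming a plain JUnit 4 test (the class name and the wait below are illustrative, not code from TestCrcCorruption):

    import org.junit.Test;

    public class TimeoutSketch {
      // JUnit 4 runs the test body on its own thread and, if that thread is
      // still blocked when the timeout expires, interrupts it and reports
      // "test timed out after 50000 milliseconds", as in the trace above.
      @Test(timeout = 50000)
      public void blocksUntilTimeout() throws Exception {
        final Object lock = new Object();
        synchronized (lock) {
          lock.wait(); // never notified, so only the timeout watchdog ends the wait
        }
      }
    }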




Build failed in Jenkins: Hadoop-Hdfs-trunk #3080

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3080/changes>

Changes:

[jianhe] YARN-5009. NMLeveldbStateStoreService database can grow substantially

------------------------------------------
[...truncated 8615 lines...]
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 219.806 sec - in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.726 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 465.695 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.971 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.809 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.074 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 148.258 sec - in org.apache.hadoop.hdfs.TestFileCreation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.181 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.376 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.602 sec - in org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.005 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.241 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 332.872 sec - in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.225 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 189.812 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.325 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.562 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.745 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 165.736 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 238.113 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.323 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.89 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.196 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.843 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.059 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.339 sec - in org.apache.hadoop.net.TestNetworkTopology
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.88 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.737 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 170.603 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.419 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.67 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.498 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.56 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.198 sec - in org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.352 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.602 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.488 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.458 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.666 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.783 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.462 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.397 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.608 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.982 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.983 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.978 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.67 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.143 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.636 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.162 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.743 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.624 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.891 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.213 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.085 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.009 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.717 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.51 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.958 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.282 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.616 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.434 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.133 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.376 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.876 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.569 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.801 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.091 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.272 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 20.31 sec - in org.apache.hadoop.fs.TestGlobPaths
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.855 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.631 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.673 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 38.032 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.112 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.146 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 38.148 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 173.412 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations

Results :

Failed tests: 
  TestSafeModeWithStripedFile.testStripedFile0:76->doTest:156 null
  TestReconstructStripedBlocksWithRackAwareness.testReconstructForNotEnoughRacks:153 expected:<0> but was:<1>
  TestDecommissioningStatus.testDecommissionStatus:302->checkDecommissionStatus:196 Unexpected num under-replicated blocks expected:<4> but was:<3>

Tests in error: 
  TestUnderReplicatedBlocks.testSetrepIncWithUnderReplicatedBlocks:70 »  test ti...
  TestEncryptionZones.testStartFileRetry:1067 »  test timed out after 120000 mil...
  TestSecureNNWithQJM.shutdown:156 »  test timed out after 30000 milliseconds

Tests run: 4401, Failures: 3, Errors: 3, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:28 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:49 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.129 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:54 h
[INFO] Finished at: 2016-04-29T08:29:01+00:00
[INFO] Final Memory: 59M/906M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results



Hadoop-Hdfs-trunk - Build # 3080 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3080/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8808 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:28 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:49 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.129 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:54 h
[INFO] Finished at: 2016-04-29T08:29:01+00:00
[INFO] Final Memory: 59M/906M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
6 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestEncryptionZones.testStartFileRetry

Error Message:
test timed out after 120000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 120000 milliseconds
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
	at org.apache.hadoop.hdfs.TestEncryptionZones.testStartFileRetry(TestEncryptionZones.java:1067)


FAILED:  org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.testStripedFile0

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertFalse(Assert.java:64)
	at org.junit.Assert.assertFalse(Assert.java:74)
	at org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.doTest(TestSafeModeWithStripedFile.java:156)
	at org.apache.hadoop.hdfs.TestSafeModeWithStripedFile.testStripedFile0(TestSafeModeWithStripedFile.java:76)


FAILED:  org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM.testSecureMode

Error Message:
test timed out after 30000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.mortbay.thread.QueuedThreadPool.doStop(QueuedThreadPool.java:435)
	at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
	at org.mortbay.jetty.Server.doStop(Server.java:291)
	at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
	at org.apache.hadoop.http.HttpServer2.stop(HttpServer2.java:962)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:257)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1872)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1932)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1908)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM.shutdown(TestSecureNNWithQJM.java:156)


FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.testReconstructForNotEnoughRacks

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness.testReconstructForNotEnoughRacks(TestReconstructStripedBlocksWithRackAwareness.java:153)


FAILED:  org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.testSetrepIncWithUnderReplicatedBlocks

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.fs.shell.SetReplication.waitForReplication(SetReplication.java:127)
	at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:77)
	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:166)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:319)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.testSetrepIncWithUnderReplicatedBlocks(TestUnderReplicatedBlocks.java:70)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus

Error Message:
Unexpected num under-replicated blocks expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: Unexpected num under-replicated blocks expected:<4> but was:<3>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:196)
	at org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:302)




Build failed in Jenkins: Hadoop-Hdfs-trunk #3079

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3079/changes>

Changes:

[jianhe] YARN-5008. LeveldbRMStateStore database can grow substantially leading

------------------------------------------
[...truncated 6447 lines...]
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:614)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:361)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"nioEventLoopGroup-18-24"  prio=10 tid=1541 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:614)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:361)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 7 on 34658" daemon prio=5 tid=1298 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"nioEventLoopGroup-18-14"  prio=10 tid=1531 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:614)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:361)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 57244" daemon prio=5 tid=1325 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner" daemon prio=5 tid=66 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
        at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3230)
        at java.lang.Thread.run(Thread.java:745)
"nioEventLoopGroup-17-4"  prio=10 tid=1489 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:614)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:361)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"Thread-905"  prio=5 tid=1431 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:768)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:741)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:555)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:586)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:582)
        at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(WebHdfsFileSystem.java:1007)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
        at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:388)
        at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
        at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
        at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
        at org.apache.hadoop.hdfs.TestDFSClientRetries$6.run(TestDFSClientRetries.java:1061)
        at java.lang.Thread.run(Thread.java:745)
"VolumeScannerThread(<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1)"> daemon prio=5 tid=1317 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"nioEventLoopGroup-18-16"  prio=10 tid=1533 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:614)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:361)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 49785" daemon prio=5 tid=1370 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server idle connection scanner for port 57244" daemon prio=5 tid=1315 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"AsyncAppender-Dispatcher-Thread-102" daemon prio=5 tid=138 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
        at java.lang.Thread.run(Thread.java:745)
"nioEventLoopGroup-18-23"  prio=10 tid=1540 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:614)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:361)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 2 on 57244" daemon prio=5 tid=1327 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"nioEventLoopGroup-18-28"  prio=10 tid=1545 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:614)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:361)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)
"Thread-730"  prio=5 tid=1223 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 336.22 sec <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHDFS
testNamenodeRestart(org.apache.hadoop.hdfs.web.TestWebHDFS)  Time elapsed: 300.037 sec  <<< ERROR!
java.lang.Exception: test timed out after 300000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:768)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:741)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:555)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:586)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1753)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:582)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:962)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:977)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:1087)
	at org.apache.hadoop.hdfs.web.TestWebHDFS.testNamenodeRestart(TestWebHDFS.java:264)

        at org.mortbay.thread
Running org.apache.hadoop.TestGenericRefresh
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.898 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.34 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.578 sec - in org.apache.hadoop.TestGenericRefresh
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.621 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs

Results :

Tests in error: 
  TestEncryptedTransfer.testLongLivedWriteClientAfterRestart:417 » Bind Port in ...
  TestFileCreationDelete.testFileCreationDeleteParent:75 » Bind Problem binding ...
  TestHFlush.testHFlushInterrupted » ClosedByInterrupt
  TestWebHDFS.testNamenodeRestart:264 »  test timed out after 300000 millisecond...

Tests run: 4401, Failures: 0, Errors: 4, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:00 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [55:03 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.106 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 59:05 min
[INFO] Finished at: 2016-04-29T05:46:52+00:00
[INFO] Final Memory: 68M/871M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results



Build failed in Jenkins: Hadoop-Hdfs-trunk #3078

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3078/changes>

Changes:

[kihwal] HDFS-9958. BlockManager#createLocatedBlocks can throw NPE for

------------------------------------------
[...truncated 5238 lines...]
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.688 sec - in org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.415 sec - in org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.77 sec - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.233 sec - in org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.396 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.058 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.891 sec - in org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.702 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.868 sec - in org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.802 sec - in org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.391 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.4 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.349 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.614 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.102 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.425 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.188 sec - in org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.279 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.791 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.668 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.175 sec - in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.118 sec - in org.apache.hadoop.hdfs.TestDisableConnCache
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 31.074 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestHFlush
testHFlushInterrupted(org.apache.hadoop.hdfs.TestHFlush)  Time elapsed: 0.847 sec  <<< ERROR!
java.nio.channels.ClosedByInterruptException: null
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:657)

Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.913 sec - in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.951 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.282 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.985 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.836 sec - in org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.799 sec - in org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.571 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.106 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.223 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.526 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.715 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.389 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.933 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 73.958 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestAclsEndToEnd
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.43 sec - in org.apache.hadoop.hdfs.TestParallelRead
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.212 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 134.231 sec - in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.182 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.777 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.256 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.417 sec - in org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.706 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Running org.apache.hadoop.hdfs.TestRead
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.133 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.044 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.594 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.453 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.033 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.073 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.967 sec - in org.apache.hadoop.hdfs.TestAclsEndToEnd
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.447 sec - in org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.323 sec - in org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.759 sec - in org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.222 sec - in org.apache.hadoop.hdfs.TestBlockMissingException
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.425 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.157 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.13 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.328 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.101 sec - in org.apache.hadoop.hdfs.TestReplication
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.733 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.913 sec - in org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.341 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.016 sec - in org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.949 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.tracing.TestTracing
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.211 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.626 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.411 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.081 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.248 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.118 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem

Results :

Tests in error: 
  TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS:95->createDirectoriesSecurely:206 » IO
  TestHFlush.testHFlushInterrupted » ClosedByInterrupt

Tests run: 4401, Failures: 0, Errors: 2, Skipped: 17
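
For readers unfamiliar with the TestHFlush failure above: java.nio.channels.ClosedByInterruptException is what the NIO layer throws when a thread is interrupted while blocked in an I/O call on an interruptible channel (the channel is closed as a side effect), which matches the DataStreamer stack trace earlier in this log. The following is a minimal, self-contained sketch of that mechanism only; it is not the Hadoop test code, and the class name, loopback server, buffer size, and sleep timing are illustrative assumptions.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ClosedByInterruptDemo {
    public static void main(String[] args) throws Exception {
        // Loopback server that accepts a connection but never reads,
        // so the writer eventually blocks once the socket buffers fill up.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));

        Thread writer = new Thread(() -> {
            try (SocketChannel ch = SocketChannel.open(server.getLocalAddress())) {
                ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
                while (true) {
                    buf.clear();
                    ch.write(buf); // blocks once the peer stops draining the socket
                }
            } catch (ClosedByInterruptException e) {
                // Same exception class seen in the TestHFlush stack trace:
                // the interrupt arrives while the thread is inside write().
                System.out.println("caught " + e);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });

        writer.start();
        SocketChannel accepted = server.accept(); // accept, but never read
        Thread.sleep(1000);                       // give the writer time to block in write()
        writer.interrupt();                       // triggers ClosedByInterruptException
        writer.join();
        accepted.close();
        server.close();
    }
}

Running this sketch prints the caught ClosedByInterruptException; whether the Jenkins failure is a test/timing issue or a product bug is not established by this log.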

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:01 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [54:12 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.099 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 58:15 min
[INFO] Finished at: 2016-04-28T22:40:12+00:00
[INFO] Final Memory: 59M/892M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results