Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/05/06 04:01:24 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk #3112

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3112/changes>

Changes:

[vinodkv] MAPREDUCE-6514. Fixed MapReduce ApplicationMaster to properly updated

------------------------------------------
[...truncated 5344 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.601 sec - in org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.745 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.96 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.106 sec - in org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.702 sec - in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.226 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.666 sec - in org.apache.hadoop.hdfs.server.mover.TestStorageMover
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.533 sec - in org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.101 sec - in org.apache.hadoop.hdfs.TestFSInputChecker
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.361 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.499 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.448 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.102 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 37, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 6.363 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.247 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.569 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.177 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.07 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.614 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 13.622 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.542 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.221 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.739 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.197 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.178 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.677 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.63 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.302 sec - in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.58 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.53 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.836 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.826 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.741 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 308.729 sec - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.88 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.107 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.596 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.469 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.54 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.524 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.966 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.041 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.879 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.411 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.866 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.2 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.336 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.619 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.387 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.255 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.056 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.625 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.316 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.056 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.302 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.335 sec - in org.apache.hadoop.tools.TestTools
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.454 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs
Running org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.042 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tracing.TestTracing
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.831 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.792 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.85 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.229 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.TestGenericRefresh
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.766 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.031 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.199 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.024 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.587 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.609 sec - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.955 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.63 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.175 sec - in org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.707 sec - in org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.845 sec - in org.apache.hadoop.cli.TestHDFSCLI

Results :

Failed tests: 
  TestEditLog.testBatchedSyncWithClosedLogs:594 logging edit without syncing should do not affect txid expected:<1> but was:<2>

Tests in error: 
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestSnapshotDiffReport.setUp:64 » NoClassDefFound org/apache/hadoop/util/Shutd...
  TestRenameWithSnapshots.testRenameUndo_4:1529 » NoClassDefFound org/apache/had...
  TestRenameWithSnapshots.testRenameUndo_6:1652 » NoClassDefFound org/apache/had...
  TestRenameWithSnapshots.testRenameUndo_7:1715 » NoClassDefFound org/apache/had...
  TestRenameWithSnapshots.testRenameAndUpdateSnapshottableDirs:1110 » NoClassDefFound
  TestRenameWithSnapshots.testRenameFromSDir2NonSDir:142 » NoClassDefFound org/a...

Tests run: 4414, Failures: 1, Errors: 13, Skipped: 17
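
The one assertion failure above follows JUnit 4's standard assertEquals reporting: a caller-supplied message (copied verbatim from the test source, grammar and all) followed by "expected:<...> but was:<...>". A minimal sketch of the pattern, assuming JUnit 4; the values are illustrative:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AssertionFormatSketch {
      @Test
      public void txidUnaffectedByUnsyncedEdit() {
        long expectedTxid = 1;
        long actualTxid = 2; // a mismatch like this produces the text above
        // Fails with: "<message> expected:<1> but was:<2>"
        assertEquals("logging edit without syncing should do not affect txid",
            expectedTxid, actualTxid);
      }
    }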

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:09 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [56:03 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.101 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 h
[INFO] Finished at: 2016-05-06T04:01:12+00:00
[INFO] Final Memory: 57M/784M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org


Hadoop-Hdfs-trunk - Build # 3113 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3113/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11345 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:15 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:16 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.153 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:21 h
[INFO] Finished at: 2016-05-06T08:15:07+00:00
[INFO] Final Memory: 57M/828M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx2048m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7072230321739974664.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire1440593190072320849tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_5326508163860081582313tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
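
The "forked VM terminated without properly saying goodbye" error means the Surefire-forked test JVM died before it could report results back, e.g. a hard crash (note the -XX:+HeapDumpOnOutOfMemoryError flag in the command line above) or an explicit System.exit somewhere in test or production code. A sketch of the System.exit case, assuming JUnit 4; this is one common cause, not a claim about what killed this particular build:

    import org.junit.Test;

    public class ExitInForkedVm {
      @Test
      public void accidentallyExitsTheTestJvm() {
        // Any code path that reaches System.exit kills the forked VM outright,
        // so Surefire never receives the test results ("no goodbye").
        System.exit(1);
      }
    }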



###################################################################################
############################## FAILED TESTS (if any) ##############################
35 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite

Error Message:
test timed out after 120000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 120000 milliseconds
	at java.lang.Object.wait(Native Method)
	at java.lang.Object.wait(Object.java:503)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1477)
	at org.apache.hadoop.ipc.Client.call(Client.java:1436)
	at org.apache.hadoop.ipc.Client.call(Client.java:1358)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy21.complete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:465)
	at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy23.complete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:806)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:784)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:430)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:221)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:184)
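
The "test timed out after 120000 milliseconds" failure is JUnit 4's @Test(timeout=...) mechanism firing: when the limit expires, JUnit reports a java.lang.Exception with that message plus the stack of whatever the test thread was blocked on, here an RPC response wait inside DFSOutputStream.close(). A minimal sketch of the mechanism, with a sleep standing in for the stuck RPC:

    import org.junit.Test;

    public class TimeoutFormatSketch {
      @Test(timeout = 120000)
      public void blocksPastTheLimit() throws Exception {
        // Blocking longer than the timeout yields:
        // java.lang.Exception: test timed out after 120000 milliseconds
        Thread.sleep(150_000);
      }
    }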


FAILED:  org.apache.hadoop.hdfs.TestAsyncDFSRename.testCallGetReturnValueMultipleTimes

Error Message:
Cannot remove data directory: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/datapath '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk': 
 absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk
 permissions: drwx
path '/home/jenkins/jenkins-slave/workspace': 
 absolute:/home/jenkins/jenkins-slave/workspace
 permissions: drwx
path '/home/jenkins/jenkins-slave': 
 absolute:/home/jenkins/jenkins-slave
 permissions: drwx
path '/home/jenkins': 
 absolute:/home/jenkins
 permissions: drwx
path '/home': 
 absolute:/home
 permissions: dr-x
path '/': 
 absolute:/
 permissions: dr-x


Stack Trace:
java.io.IOException: Cannot remove data directory: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/datapath '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk': 
	absolute:/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk
	permissions: drwx
path '/home/jenkins/jenkins-slave/workspace': 
	absolute:/home/jenkins/jenkins-slave/workspace
	permissions: drwx
path '/home/jenkins/jenkins-slave': 
	absolute:/home/jenkins/jenkins-slave
	permissions: drwx
path '/home/jenkins': 
	absolute:/home/jenkins
	permissions: drwx
path '/home': 
	absolute:/home
	permissions: dr-x
path '/': 
	absolute:/
	permissions: dr-x

	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:834)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.testCallGetReturnValueMultipleTimes(TestAsyncDFSRename.java:124)
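
The diagnostic above is MiniDFSCluster walking every ancestor of the data directory it could not remove and printing the coarse permission view it has of each ("permissions: drwx"). A rough standalone equivalent of that walk using plain java.io.File; the default path and output format are approximations of what the cluster code prints:

    import java.io.File;

    public class PermissionWalk {
      public static void main(String[] args) {
        File f = new File(args.length > 0 ? args[0]
            : "target/test/data/1/dfs/data").getAbsoluteFile();
        // Walk up to the filesystem root, printing what this JVM can do at
        // each level; a missing 'w' or 'x' on any ancestor explains why the
        // recursive delete failed.
        for (; f != null; f = f.getParentFile()) {
          System.out.printf("path '%s':%n permissions: %s%s%s%s%n",
              f.getPath(),
              f.isDirectory() ? "d" : "-",
              f.canRead() ? "r" : "-",
              f.canWrite() ? "w" : "-",
              f.canExecute() ? "x" : "-");
        }
      }
    }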


FAILED:  org.apache.hadoop.hdfs.TestAsyncDFSRename.testConservativeConcurrentAsyncRenameWithOverwrite

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
	at java.lang.Object.wait(Native Method)
	at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:430)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:220)
	at org.apache.hadoop.hdfs.TestAsyncDFSRename.testConservativeConcurrentAsyncRenameWithOverwrite(TestAsyncDFSRename.java:191)


FAILED:  org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

Error Message:
test timed out after 50000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 50000 milliseconds
	at java.lang.Object.wait(Native Method)
	at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:136)


FAILED:  org.apache.hadoop.hdfs.TestDFSInputStream.testSkipWithRemoteBlockReader

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:454)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:346)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:109)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestDFSInputStream.testSkipWithRemoteBlockReader(TestDFSInputStream.java:77)
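
A NoClassDefFoundError for org/apache/hadoop/security/authentication/server/AuthenticationFilter means the class was present at compile time but missing (or failed to initialize) on the test runtime classpath; that class normally ships in the hadoop-auth artifact. A quick probe one could run under the same classpath as the failing tests -- the class name is taken from the error above, everything else is illustrative:

    public class ClasspathProbe {
      public static void main(String[] args) {
        String name =
            "org.apache.hadoop.security.authentication.server.AuthenticationFilter";
        try {
          Class<?> c = Class.forName(name);
          // getCodeSource() may be null for bootstrap classes, but for a
          // jar-provided class it reveals which jar actually supplied it.
          System.out.println("loaded from "
              + c.getProtectionDomain().getCodeSource());
        } catch (ClassNotFoundException e) {
          System.out.println(name + " is not on the runtime classpath");
        }
      }
    }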


FAILED:  org.apache.hadoop.hdfs.TestDFSInputStream.testSeekToNewSource

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:454)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:346)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:109)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestDFSInputStream.testSeekToNewSource(TestDFSInputStream.java:120)


FAILED:  org.apache.hadoop.hdfs.TestDFSInputStream.testSkipWithRemoteBlockReader2

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:454)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:346)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:109)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestDFSInputStream.testSkipWithRemoteBlockReader2(TestDFSInputStream.java:88)


FAILED:  org.apache.hadoop.hdfs.TestDFSInputStream.testSkipWithLocalBlockReader

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:454)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:346)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:109)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestDFSInputStream.testSkipWithLocalBlockReader(TestDFSInputStream.java:106)


FAILED:  org.apache.hadoop.hdfs.TestEncryptionZones.testStartFileRetry

Error Message:
test timed out after 120000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 120000 milliseconds
	at sun.misc.Unsafe.park(Native Method)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
	at org.apache.hadoop.hdfs.TestEncryptionZones.testStartFileRetry(TestEncryptionZones.java:1176)


FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:46118,DS-649dbed4-98d3-48a9-9c5d-e529cba7f7d0,DISK], DatanodeInfoWithStorage[127.0.0.1:50681,DS-3e5e08ca-e8e0-4b6c-80fb-b78ebddf43c0,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:46118,DS-649dbed4-98d3-48a9-9c5d-e529cba7f7d0,DISK], DatanodeInfoWithStorage[127.0.0.1:50681,DS-3e5e08ca-e8e0-4b6c-80fb-b78ebddf43c0,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:46118,DS-649dbed4-98d3-48a9-9c5d-e529cba7f7d0,DISK], DatanodeInfoWithStorage[127.0.0.1:50681,DS-3e5e08ca-e8e0-4b6c-80fb-b78ebddf43c0,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:46118,DS-649dbed4-98d3-48a9-9c5d-e529cba7f7d0,DISK], DatanodeInfoWithStorage[127.0.0.1:50681,DS-3e5e08ca-e8e0-4b6c-80fb-b78ebddf43c0,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1236)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1427)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1342)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:603)
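
The error message itself names the knob: the client-side property dfs.client.block.write.replace-datanode-on-failure.policy, which defaults to DEFAULT. A sketch of setting it to NEVER so that append pipelines on a tiny (two or three node) cluster do not demand a spare datanode; whether relaxing the policy is the right fix here, rather than investigating why the datanodes dropped out of the pipeline, is an assumption:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class RelaxedReplacementPolicy {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NEVER tells this client not to look for a replacement datanode when
        // one in the write pipeline fails -- reasonable only where no spare
        // datanode can exist, e.g. a 2-3 node (mini)cluster.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
            "NEVER");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("client configured against " + fs.getUri());
      }
    }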


FAILED:  org.apache.hadoop.hdfs.TestFileAppend4.testUpdateNeededReplicationsForAppendedFile

Error Message:
Timed out waiting for /testAppend to reach 2 replicas

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for /testAppend to reach 2 replicas
	at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:771)
	at org.apache.hadoop.hdfs.TestFileAppend4.testUpdateNeededReplicationsForAppendedFile(TestFileAppend4.java:315)
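
DFSTestUtil.waitReplication polls until the file's blocks reach the requested replica count and raises the TimeoutException seen above when they never do. A rough, self-contained equivalent of that wait using only public FileSystem APIs; the polling interval and message format are approximations, not DFSTestUtil's actual values:

    import java.util.concurrent.TimeoutException;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WaitForReplicas {
      static void waitForReplicas(FileSystem fs, Path file, int target,
          long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
          FileStatus st = fs.getFileStatus(file);
          boolean satisfied = true;
          // Every block must report at least 'target' hosts holding it.
          for (BlockLocation b : fs.getFileBlockLocations(st, 0, st.getLen())) {
            if (b.getHosts().length < target) {
              satisfied = false;
              break;
            }
          }
          if (satisfied) {
            return;
          }
          if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException("Timed out waiting for " + file
                + " to reach " + target + " replicas");
          }
          Thread.sleep(100); // replication happens asynchronously; keep polling
        }
      }
    }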


FAILED:  org.apache.hadoop.hdfs.TestFileCorruption.testArrayOutOfBoundsException

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:454)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:346)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:109)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestFileCorruption.testArrayOutOfBoundsException(TestFileCorruption.java:136)


FAILED:  org.apache.hadoop.hdfs.TestFileCorruption.testCorruptionWithDiskFailure

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:454)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:346)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:109)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestFileCorruption.testCorruptionWithDiskFailure(TestFileCorruption.java:185)


FAILED:  org.apache.hadoop.hdfs.TestFileCorruption.testFileCorruption

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/server/AuthenticationFilter
	at org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:454)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:346)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:109)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:862)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestFileCorruption.testFileCorruption(TestFileCorruption.java:79)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadZeroBytesNoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@5ca969ac

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@5ca969ac
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadZeroBytesNoChecksum(TestBlockReaderLocal.java:695)
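
"Premature EOF" here is PacketReceiver discovering that the socket closed partway through a fixed-length packet: a read-fully loop hit end-of-stream before filling its buffer. A generic illustration of that failure mode; PacketReceiver does the NIO-channel equivalent, so this plain-stream version is only an analogy:

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadFullySketch {
      static void readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
          int n = in.read(buf, off, buf.length - off);
          if (n < 0) {
            // The peer closed mid-packet: fewer bytes arrived than the
            // packet header promised.
            throw new EOFException("Premature EOF reading from " + in);
          }
          off += n;
        }
      }
    }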


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderSimpleReads

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@e49838f

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@e49838f
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderSimpleReads(TestBlockReaderLocal.java:258)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferFastLaneReadsNoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@75a899ad

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@75a899ad
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferFastLaneReadsNoChecksum(TestBlockReaderLocal.java:430)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalWithMlockChanges

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@985e5b9

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@985e5b9
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalWithMlockChanges(TestBlockReaderLocal.java:587)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadCorrupt

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@3cbc2693

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@3cbc2693
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadCorrupt(TestBlockReaderLocal.java:535)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalArrayReads2

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@56088977

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@56088977
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalArrayReads2(TestBlockReaderLocal.java:308)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadCorruptStart

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@36ad4810

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@36ad4810
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadCorruptStart(TestBlockReaderLocal.java:484)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalOnFileWithoutChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@67f56c9a

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@67f56c9a
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalOnFileWithoutChecksum(TestBlockReaderLocal.java:660)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadZeroBytes

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@551a9433

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@551a9433
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadZeroBytes(TestBlockReaderLocal.java:688)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadCorruptNoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@3116c68f

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@3116c68f
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalReadCorruptNoChecksum(TestBlockReaderLocal.java:542)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferReadsNoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@4105cc5d

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@4105cc5d
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferReadsNoChecksum(TestBlockReaderLocal.java:359)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalWithMlockChangesNoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@48d2cb06

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@48d2cb06
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalWithMlockChangesNoChecksum(TestBlockReaderLocal.java:594)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalArrayReads2NoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@4749ca33

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@4749ca33
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalArrayReads2NoChecksum(TestBlockReaderLocal.java:315)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferFastLaneReads

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@5717202e

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@5717202e
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferFastLaneReads(TestBlockReaderLocal.java:423)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderSimpleReadsNoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@18f77652

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@18f77652
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderSimpleReadsNoChecksum(TestBlockReaderLocal.java:270)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferReads

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@63df00db

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@63df00db
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalByteBufferReads(TestBlockReaderLocal.java:352)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalOnFileWithoutChecksumNoChecksum

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@12945396

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@12945396
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderLocalOnFileWithoutChecksumNoChecksum(TestBlockReaderLocal.java:667)


FAILED:  org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderSimpleReadsShortReadahead

Error Message:
Premature EOF reading from org.apache.hadoop.net.SocketInputStream@476fd7c7

Stack Trace:
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@476fd7c7
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.readNextPacket(BlockReaderRemote2.java:187)
	at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.read(BlockReaderRemote2.java:144)
	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:800)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:879)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:939)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:984)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:202)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:168)
	at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testBlockReaderSimpleReadsShortReadahead(TestBlockReaderLocal.java:264)
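
All of the TestBlockReaderLocal failures above share a single signature: the test fell back from the short-circuit local reader to BlockReaderRemote2, and the remote transfer then hit end-of-stream mid-packet. The "Premature EOF" message is raised by the channel-draining loop in PacketReceiver.readChannelFully; a minimal sketch of that loop, reconstructed from the frame names above rather than copied from the Hadoop source, looks like this:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.ReadableByteChannel;

    final class ReadFullySketch {
      // Keep reading until the buffer is full. If the peer closes the
      // socket first, read() returns -1 before the buffer fills, and the
      // reader surfaces the "Premature EOF" IOException seen above.
      static void readChannelFully(ReadableByteChannel ch, ByteBuffer buf)
          throws IOException {
        while (buf.remaining() > 0) {
          int n = ch.read(buf);
          if (n < 0) {
            throw new IOException("Premature EOF reading from " + ch);
          }
        }
      }
    }

The individual tests differ only in which read path (byte array vs. ByteBuffer, with or without checksums) drove the transfer; in every case the datanode side of the socket went away mid-read.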


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithJournalNodes

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithJournalNodes(TestDFSUpgradeWithHA.java:687)
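
The "null" above is not a NullPointerException: a bare org.junit.Assert.assertTrue(condition) carries no failure message, so when the condition is false the report reads "java.lang.AssertionError: null". A minimal sketch reproducing the shape of this failure (the condition name is hypothetical):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class BareAssertSketch {
      @Test
      public void failsWithNullMessage() {
        boolean rollbackSucceeded = false;  // hypothetical stand-in condition
        // Fails as "java.lang.AssertionError: null". Passing a message as
        // the first argument would make the Jenkins report self-describing.
        assertTrue(rollbackSucceeded);
      }
    }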


FAILED:  org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache.testDataXceiverCleansUpSlotsOnFailure

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache$17.accept(TestShortCircuitCache.java:633)
	at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.visit(ShortCircuitRegistry.java:403)
	at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache.checkNumberOfSegmentsAndSlots(TestShortCircuitCache.java:628)
	at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache.testDataXceiverCleansUpSlotsOnFailure(TestShortCircuitCache.java:682)
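
Here the test walks the datanode's ShortCircuitRegistry and asserts on how many shared-memory segments and slots are still registered; "expected:<1> but was:<2>" means one slot more than expected survived the failure path, i.e. a slot was not cleaned up (or the check raced the DataXceiver's cleanup). Below is a rough sketch of the counting pattern implied by the TestShortCircuitCache$17.accept frame; the Visitor signature here is an assumption, not the real ShortCircuitRegistry API:

    import java.util.HashMap;

    import org.junit.Assert;

    final class SlotCountSketch {
      // Assumed shape of the registry walk behind
      // checkNumberOfSegmentsAndSlots: visit the currently registered
      // segments and slots and assert on their counts.
      interface Visitor {
        void accept(HashMap<?, ?> segments, HashMap<?, ?> slots);
      }

      static Visitor expecting(int expectedSegments, int expectedSlots) {
        return (segments, slots) -> {
          Assert.assertEquals(expectedSegments, segments.size());
          // A leaked slot fails here as "expected:<1> but was:<2>".
          Assert.assertEquals(expectedSlots, slots.size());
        };
      }
    }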


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf903.gq1.ygridcore.net/67.195.81.147"; destination host is: "localhost":52562; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Cannot get a KDC reply)]; Host Details : local host is: "asf903.gq1.ygridcore.net/67.195.81.147"; destination host is: "localhost":52562; 
	at sun.security.krb5.KdcComm.send(KdcComm.java:250)
	at sun.security.krb5.KdcComm.send(KdcComm.java:191)
	at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:417)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:799)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:795)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
	at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:417)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1547)
	at org.apache.hadoop.ipc.Client.call(Client.java:1394)
	at org.apache.hadoop.ipc.Client.call(Client.java:1358)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
	at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:569)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy27.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1908)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:206)
	at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:146)
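
"Cannot get a KDC reply" means the JVM's Kerberos client never got an answer from the test KDC, so the SASL GSSAPI handshake with the NameNode failed before the mkdirs RPC went anywhere; on a shared build slave this usually points at the KDC process or its port rather than at HDFS code. Secure tests of this kind typically stand up an in-process KDC along these lines (a sketch using the org.apache.hadoop.minikdc.MiniKdc test utility; the work directory and principal names are placeholders):

    import java.io.File;
    import java.util.Properties;

    import org.apache.hadoop.minikdc.MiniKdc;

    public class MiniKdcSketch {
      public static void main(String[] args) throws Exception {
        Properties kdcConf = MiniKdc.createConf();
        File workDir = new File("target/kdc");
        MiniKdc kdc = new MiniKdc(kdcConf, workDir);
        kdc.start();  // if this KDC is unreachable, clients fail with
                      // "Cannot get a KDC reply" during GSS initiate
        File keytab = new File(workDir, "test.keytab");
        kdc.createPrincipal(keytab, "hdfs/localhost", "HTTP/localhost");
        System.out.println("KDC realm: " + kdc.getRealm());
        kdc.stop();
      }
    }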




Hadoop-Hdfs-trunk - Build # 3114 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3114/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5383 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:03 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [56:15 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.118 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 h
[INFO] Finished at: 2016-05-06T23:51:30+00:00
[INFO] Final Memory: 59M/853M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
Expected to find 'DatanodeInfoWithStorage[127.0.0.1:50287,DS-58dd750e-e516-458a-bbe3-cb517119877e,DISK]' but got unexpected exception:java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50287,DS-486a95fa-e3cb-4031-9f8b-99bab17651a7,DISK]] are bad. Aborting...
 at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
 at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
 at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
 at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)


Stack Trace:
java.lang.AssertionError: Expected to find 'DatanodeInfoWithStorage[127.0.0.1:50287,DS-58dd750e-e516-458a-bbe3-cb517119877e,DISK]' but got unexpected exception:java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50287,DS-486a95fa-e3cb-4031-9f8b-99bab17651a7,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)

	at org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:248)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume(TestFsDatasetImpl.java:686)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50287,DS-486a95fa-e3cb-4031-9f8b-99bab17651a7,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)
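
testCleanShutdownOfVolume expects the failed write to name the specific storage that was shut down and checks for it with GenericTestUtils.assertExceptionContains. The assertion fails because pipeline recovery died first with the broader "All datanodes ... are bad" error: with a single datanode in the pipeline, DataStreamer has no replacement node to rebuild onto, so it aborts. A sketch of the assertion pattern involved (the helper and its parameters are illustrative, not the test's actual code):

    import org.apache.hadoop.test.GenericTestUtils;

    import static org.junit.Assert.fail;

    public class AssertExceptionContainsSketch {
      // Force the failure, then require a specific marker string in the
      // exception text. If a different error surfaces first,
      // assertExceptionContains raises the "Expected to find '...' but got
      // unexpected exception" failure shown above.
      static void expectStorageFailure(Runnable failingWrite, String storageId) {
        try {
          failingWrite.run();
          fail("expected a failure naming " + storageId);
        } catch (Exception e) {
          GenericTestUtils.assertExceptionContains(storageId, e);
        }
      }
    }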




Jenkins build is back to normal : Hadoop-Hdfs-trunk #3115

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3115/changes>




Build failed in Jenkins: Hadoop-Hdfs-trunk #3114

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3114/changes>

Changes:

[aw] HADOOP-12866. add a subcommand for gridmix (Kai Sasaki via aw)

[jlowe] MAPREDUCE-6689. MapReduce job can infinitely increase number of reducer

[wangda] getApplicationReport call may raise NPE for removed queues. (Jian He via

------------------------------------------
[...truncated 5190 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.469 sec - in org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.269 sec - in org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.597 sec - in org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.728 sec - in org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.499 sec - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.147 sec - in org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.463 sec - in org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.694 sec - in org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.398 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.908 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.994 sec - in org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.021 sec - in org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.505 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.958 sec - in org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.466 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.459 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.628 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.552 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.21 sec - in org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.358 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.651 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.299 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.584 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.039 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.576 sec - in org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.711 sec - in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 148.035 sec - in org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.339 sec - in org.apache.hadoop.hdfs.TestDisableConnCache
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.041 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.159 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.174 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.294 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.094 sec - in org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.496 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.826 sec - in org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.601 sec - in org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.295 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.294 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.369 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.111 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.02 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Running org.apache.hadoop.hdfs.TestAclsEndToEnd
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 137.075 sec - in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.594 sec - in org.apache.hadoop.hdfs.TestParallelRead
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.265 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.906 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.395 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.314 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.769 sec - in org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.413 sec - in org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.66 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.041 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.93 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Running org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.533 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.997 sec - in org.apache.hadoop.hdfs.TestAclsEndToEnd
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.992 sec - in org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.992 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.889 sec - in org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.989 sec - in org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.999 sec - in org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.074 sec - in org.apache.hadoop.hdfs.TestBlockMissingException
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.126 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.904 sec - in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.638 sec - in org.apache.hadoop.hdfs.TestReplication
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.455 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.713 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.162 sec - in org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestClose
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.187 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.626 sec - in org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.48 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.241 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.tracing.TestTracing
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.679 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.165 sec - in org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.309 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.097 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.431 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.158 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120

Results :

Failed tests: 
  TestFsDatasetImpl.testCleanShutdownOfVolume:686 Expected to find 'DatanodeInfoWithStorage[127.0.0.1:50287,DS-58dd750e-e516-458a-bbe3-cb517119877e,DISK]' but got unexpected exception:java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50287,DS-486a95fa-e3cb-4031-9f8b-99bab17651a7,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)


Tests run: 4414, Failures: 1, Errors: 0, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:03 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [56:15 min]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.118 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 h
[INFO] Finished at: 2016-05-06T23:51:30+00:00
[INFO] Final Memory: 59M/853M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results



Build failed in Jenkins: Hadoop-Hdfs-trunk #3113

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3113/changes>

Changes:

[iwasakims] HDFS-2043. TestHFlush failing intermittently. Contributed by Lin Yiqun.

------------------------------------------
[...truncated 11152 lines...]
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"Timer-9" daemon prio=5 tid=135 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)
"IPC Server handler 5 on 41228" daemon prio=5 tid=483 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@7a1d2922" daemon prio=5 tid=106 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 0 on 41504" daemon prio=5 tid=256 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.util.JvmPauseMonitor$Monitor@30ef19d4" daemon prio=5 tid=169 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
        at java.lang.Thread.run(Thread.java:745)
"Async disk worker #0 for volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data15/current"> daemon prio=5 tid=583 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
"refreshUsed-<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/data/data3/current/BP-2116724570-67.195.81.147-1462522228298"> daemon prio=5 tid=285 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:158)
        at java.lang.Thread.run(Thread.java:745)
"IPC Server handler 1 on 41228" daemon prio=5 tid=479 timed_waiting
java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
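Every handler thread in the dump above is idle in the same pattern: parked inside LinkedBlockingQueue.poll, waiting for the next IPC call. A minimal standalone sketch of that pattern (illustrative names only, not Hadoop source) shows why such threads are reported as TIMED_WAITING:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class HandlerSketch {
        public static void main(String[] args) throws InterruptedException {
            final LinkedBlockingQueue<Runnable> callQueue =
                    new LinkedBlockingQueue<Runnable>();
            Thread handler = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            // poll(timeout) parks the thread via
                            // LockSupport.parkNanos, so a thread dump reports
                            // it as TIMED_WAITING, like the handlers above.
                            Runnable call = callQueue.poll(100, TimeUnit.MILLISECONDS);
                            if (call != null) {
                                call.run();
                            }
                        }
                    } catch (InterruptedException ie) {
                        // shutdown: fall through and exit
                    }
                }
            }, "IPC Server handler (sketch)");
            handler.setDaemon(true);
            handler.start();
            Thread.sleep(300);   // handler idles in TIMED_WAITING here
            handler.interrupt(); // poll throws InterruptedException, thread exits
        }
    }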
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.512 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 70.851 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestCrcCorruption
testCorruptionDuringWrt(org.apache.hadoop.hdfs.TestCrcCorruption)  Time elapsed: 50.314 sec  <<< ERROR!
java.lang.Exception: test timed out after 50000 milliseconds
	at java.lang.Object.wait(Native Method)
	at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
	at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:136)
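The "test timed out after 50000 milliseconds" wrapper above is JUnit 4's timeout handling; judging by the message, testCorruptionDuringWrt presumably carries @Test(timeout = 50000). A self-contained sketch of the same failure mode (hypothetical test, not the Hadoop source):

    import org.junit.Test;

    public class TimeoutSketch {
        // JUnit 4 fails the test after the timeout elapses, reporting
        // "java.lang.Exception: test timed out after 50000 milliseconds".
        @Test(timeout = 50000)
        public void testThatBlocksForever() throws Exception {
            final Object lock = new Object();
            synchronized (lock) {
                lock.wait();  // never notified, like waitForAckedSeqno above
            }
        }
    }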

Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.078 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.728 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.323 sec - in org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 118.723 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.404 sec - in org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.134 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.274 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.867 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.687 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 43, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.072 sec - in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.681 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.88 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.103 sec - in org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.174 sec - in org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.682 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.331 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.749 sec - in org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.527 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.053 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.141 sec - in org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.515 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.027 sec - in org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.363 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.149 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.security.TestPermission
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.202 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.387 sec - in org.apache.hadoop.security.TestPermission
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.754 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.621 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.24 sec - in org.apache.hadoop.cli.TestHDFSCLI

Results :

Failed tests: 
  TestShortCircuitCache.testDataXceiverCleansUpSlotsOnFailure:682->checkNumberOfSegmentsAndSlots:628 expected:<1> but was:<2>
  TestDFSUpgradeWithHA.testRollbackWithJournalNodes:687 null

Tests in error: 
  TestRollingFileSystemSinkWithSecureHdfs.testMissingPropertiesWithSecureHDFS:146->createDirectoriesSecurely:206 » IO
  TestAsyncDFSRename.testAggressiveConcurrentAsyncRenameWithOverwrite:184->internalTestConcurrentAsyncRenameWithOverwrite:221->Object.wait:503->Object.wait:-2 » 
  TestAsyncDFSRename.testCallGetReturnValueMultipleTimes:124 » IO Cannot remove ...
  TestFileAppend.testMultipleAppends » IO Failed to replace a bad datanode on th...
  TestAsyncDFSRename.testConservativeConcurrentAsyncRenameWithOverwrite:191->internalTestConcurrentAsyncRenameWithOverwrite:220->Object.wait:-2 » 
  TestEncryptionZones.testStartFileRetry:1176 »  test timed out after 120000 mil...
  TestDFSInputStream.testSkipWithRemoteBlockReader:77 » NoClassDefFound org/apac...
  TestFileCorruption.testArrayOutOfBoundsException:136 » NoClassDefFound org/apa...
  TestDFSInputStream.testSeekToNewSource:120 » NoClassDefFound org/apache/hadoop...
  TestDFSInputStream.testSkipWithRemoteBlockReader2:88 » NoClassDefFound org/apa...
  TestDFSInputStream.testSkipWithLocalBlockReader:106 » NoClassDefFound org/apac...
  TestFileCorruption.testCorruptionWithDiskFailure:185 » NoClassDefFound org/apa...
  TestFileCorruption.testFileCorruption:79 » NoClassDefFound org/apache/hadoop/s...
  TestBlockReaderLocal.testBlockReaderLocalReadZeroBytesNoChecksum:695->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderSimpleReads:258->runBlockReaderLocalTest:168 » IO
  TestFileAppend4.testUpdateNeededReplicationsForAppendedFile:315 » Timeout Time...
  TestBlockReaderLocal.testBlockReaderLocalByteBufferFastLaneReadsNoChecksum:430->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalWithMlockChanges:587->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalReadCorrupt:535->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalArrayReads2:308->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalReadCorruptStart:484->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalOnFileWithoutChecksum:660->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalReadZeroBytes:688->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalReadCorruptNoChecksum:542->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalByteBufferReadsNoChecksum:359->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalWithMlockChangesNoChecksum:594->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalArrayReads2NoChecksum:315->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalByteBufferFastLaneReads:423->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderSimpleReadsNoChecksum:270->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalByteBufferReads:352->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderLocalOnFileWithoutChecksumNoChecksum:667->runBlockReaderLocalTest:168 » IO
  TestBlockReaderLocal.testBlockReaderSimpleReadsShortReadahead:264->runBlockReaderLocalTest:168 » IO
  TestCrcCorruption.testCorruptionDuringWrt:136->Object.wait:-2 »  test timed ou...

Tests run: 4411, Failures: 2, Errors: 33, Skipped: 17
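For reading the summary above: the "expected:<1> but was:<2>" entries under Failed tests are JUnit's assertEquals formatting, and a bare "null" (as for testRollbackWithJournalNodes) is an assertion that failed without a message. A sketch with hypothetical values:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class AssertionFormatSketch {
        @Test
        public void testSegmentCount() {
            int segments = 2;          // hypothetical observed value
            assertEquals(1, segments); // fails: "expected:<1> but was:<2>"
        }

        @Test
        public void testRollback() {
            assertTrue(false);         // no message, so the summary shows "null"
        }
    }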

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:15 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:16 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.153 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:21 h
[INFO] Finished at: 2016-05-06T08:15:07+00:00
[INFO] Final Memory: 57M/828M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx2048m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7072230321739974664.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire1440593190072320849tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_5326508163860081582313tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
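Unlike #3112, which failed on ordinary test failures, this run ended when surefire's forked JVM (started by the /bin/sh command above) exited before reporting back. One way to reproduce that symptom, as a sketch only; the real cause in this build cannot be determined from the log:

    import org.junit.Test;

    public class ForkKillerSketch {
        @Test
        public void testThatEndsTheForkedVm() {
            // An explicit exit (or a hard JVM crash, e.g. a native fault or
            // an OutOfMemoryError abort) ends surefire's forked JVM before it
            // can report results, producing "The forked VM terminated without
            // properly saying goodbye."
            System.exit(1);  // hypothetical; shown only to illustrate the symptom
        }
    }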
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org