Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2015/05/06 17:32:54 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk #2117

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2117/changes>

Changes:

[jlowe] YARN-3552. RM Web UI shows -1 running containers for completed apps. Contributed by Rohith

[wang] HADOOP-11120. hadoop fs -rmr gives wrong advice. Contributed by Juliet Houghland.

[junping_du] YARN-3396. Handle URISyntaxException in ResourceLocalizationService. (Contributed by Brahma Reddy Battula)

[aw] HADOOP-11911. test-patch should allow configuration of default branch (Sean Busbey via aw)

[xgong] YARN-2123. Progress bars in Web UI always at 100% due to non-US locale.

[cmccabe] HDFS-8305: HDFS INotify: the destination field of RenameOp should always end with the file name (cmccabe)

[aw] HADOOP-11904. test-patch.sh goes into an infinite loop on non-maven builds (aw)

[cmccabe] HDFS-7758. Retire FsDatasetSpi#getVolumes() and use FsDatasetSpi#getVolumeRefs() instead (Lei (Eddy) Xu via Colin P. McCabe)

[aw] HADOOP-11917. test-patch.sh should work with ${BASEDIR}/patchprocess setups (aw)

[jianhe] YARN-3343. Increased TestCapacitySchedulerNodeLabelUpdate#testNodeUpdate timeout. Contributed by Rohith Sharmaks

[cmccabe] HDFS-7847. Modify NNThroughputBenchmark to be able to operate on a remote NameNode (Charles Lamb via Colin P. McCabe)

[xyao] HDFS-8219. setStoragePolicy with folder behavior is different after cluster restart. (surendra singh lilhore via Xiaoyu Yao)

[rkanter] MAPREDUCE-6192. Create unit test to automatically compare MR related classes and mapred-default.xml (rchiang via rkanter)

[wheat9] HDFS-8314. Move HdfsServerConstants#IO_FILE_BUFFER_SIZE and SMALL_BUFFER_SIZE to the users. Contributed by Li Lu.

[aw] HADOOP-11926. test-patch.sh mv does wrong math (aw)

[cmccabe] HADOOP-11912. Extra configuration key used in TraceUtils should respect prefix (Masatake Iwasaki via Colin P. McCabe)

[xgong] YARN-3582. NPE in WebAppProxyServlet. Contributed by Jian He

------------------------------------------
[...truncated 6983 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.608 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.089 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.508 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.555 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.385 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.664 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.905 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.678 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.989 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.423 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.634 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.069 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.476 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.396 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.494 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.525 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.619 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.568 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.683 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.465 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.869 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.746 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.851 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.236 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.171 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.451 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.865 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.415 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.995 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.51 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.797 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.049 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.431 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.673 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.326 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.139 sec <<< FAILURE! - in org.apache.hadoop.tracing.TestTraceAdmin
testCreateAndDestroySpanReceiver(org.apache.hadoop.tracing.TestTraceAdmin)  Time elapsed: 4.056 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)

Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.577 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.976 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.036 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.261 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.683 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.985 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.157 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.918 sec - in org.apache.hadoop.cli.TestXAttrCLI

Results :

Tests in error: 
  TestReplicationPolicyWithNodeGroup.<clinit>:77 » NoClassDefFound org/apache/ha...
  TestReplicationPolicyWithNodeGroup.testChooseTargetWithDependencies » NoClassDefFound
  TestReplicationPolicyWithNodeGroup.testChooseTarget1 » NoClassDefFound Could n...
  TestReplicationPolicyWithNodeGroup.testChooseTarget2 » NoClassDefFound Could n...
  TestReplicationPolicyWithNodeGroup.testChooseTarget3 » NoClassDefFound Could n...
  TestReplicationPolicyWithNodeGroup.testChooseTarget4 » NoClassDefFound Could n...
  TestReplicationPolicyWithNodeGroup.testChooseTarget5 » NoClassDefFound Could n...
  TestReplicationPolicyWithNodeGroup.testChooseTargetsOnBoundaryTopology » NoClassDefFound
  TestReplicationPolicyWithNodeGroup.testRereplicate1 » NoClassDefFound Could no...
  TestReplicationPolicyWithNodeGroup.testRereplicate2 » NoClassDefFound Could no...
  TestReplicationPolicyWithNodeGroup.testRereplicate3 » NoClassDefFound Could no...
  TestReplicationPolicyWithNodeGroup.testChooseMoreTargetsThanNodeGroups » NoClassDefFound
  TestReplicationPolicyWithNodeGroup.testChooseReplicaToDelete » NoClassDefFound
  TestFsck.testFsckForSnapshotFiles:1258 » Runtime java.util.zip.ZipException: i...
  TestFsck.testFsckMisPlacedReplicas:1097 » Runtime java.util.zip.ZipException: ...
  TestFsck.testBlockIdCKDecommission:1355 » Runtime java.util.zip.ZipException: ...
  TestFsck.testCorruptBlock:644 » Runtime java.util.zip.ZipException: invalid st...
  TestFsck.testFsckMove:324 » Runtime java.util.zip.ZipException: invalid stored...
  TestFsck.testFsckListCorruptFilesBlocks:908 » Runtime java.util.zip.ZipExcepti...
  TestFsck.testBlockIdCKCorruption:1439 » Runtime java.util.zip.ZipException: in...
  TestFsck.testFsckError:876 » Runtime java.util.zip.ZipException: invalid store...
  TestFsck.testFsckFileNotFound:1160 » NoClassDefFound org/apache/hadoop/ha/HASe...
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3360, Failures: 0, Errors: 14, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 48.314 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:11 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.212 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:12 h
[INFO] Finished at: 2015-05-06T15:32:15+00:00
[INFO] Final Memory: 76M/687M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362671 bytes
Compression is 0.0%
Took 15 sec
Recording test results
Updating YARN-3582
Updating YARN-3552
Updating HADOOP-11120
Updating HADOOP-11904
Updating HADOOP-11912
Updating HDFS-8314
Updating HADOOP-11911
Updating HADOOP-11926
Updating HADOOP-11917
Updating YARN-3396
Updating MAPREDUCE-6192
Updating HDFS-8305
Updating HDFS-7758
Updating HDFS-7847
Updating HDFS-8219
Updating YARN-2123
Updating YARN-3343

Hadoop-Hdfs-trunk - Build # 2118 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2118/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6839 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.601 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-07T14:26:52+00:00
[INFO] Final Memory: 54M/681M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362786 bytes
Compression is 0.0%
Took 6.2 sec
Recording test results
Updating YARN-3243
Updating YARN-3580
Updating YARN-3577
Updating HADOOP-11813
Updating YARN-3385
Updating MAPREDUCE-6356
Updating YARN-3491
Updating HDFS-8310
Updating YARN-3301
Updating HDFS-8325
Updating HDFS-7833
Updating HADOOP-10387
Updating HDFS-2484
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)
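
For context on what this recurring failure exercises: TestTraceAdmin drives the trace admin tool (org.apache.hadoop.tracing.TraceAdmin, the backend of `hadoop trace`) against a MiniDFSCluster NameNode to attach and then remove a span receiver over RPC. The sketch below shows an equivalent client-side call; it is illustrative only, and the host:port, the receiver output path, and the exact -C configuration key are assumptions rather than the test's actual values.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tracing.TraceAdmin;
import org.apache.hadoop.util.ToolRunner;

public class AddSpanReceiverSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder NameNode RPC address; TestTraceAdmin uses its MiniDFSCluster's address.
    String namenode = "127.0.0.1:8020";
    // Ask the NameNode to instantiate a LocalFileSpanReceiver. This is the RPC that the
    // build above reports as "Failed to load SpanReceiver" when the receiver class
    // cannot be constructed on the server side.
    int ret = ToolRunner.run(conf, new TraceAdmin(), new String[] {
        "-add",
        "-host", namenode,
        "-class", "org.apache.htrace.impl.LocalFileSpanReceiver",
        // Receiver-specific settings are passed as -Ckey=value; this key name is assumed.
        "-Clocal-file-span-receiver.path=/tmp/htrace.out"
    });
    System.exit(ret);
  }
}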



Hadoop-Hdfs-trunk - Build # 2122 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2122/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6890 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.224 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-11T14:19:48+00:00
[INFO] Final Memory: 59M/705M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3162287917913898657.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5640927633655737717tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4629163836801978104477tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362828 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-8351
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.tools.TestHdfsConfigFields.testCompareXmlAgainstConfigurationClass

Error Message:
hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Stack Trace:
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
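
For context on what this assertion checks: TestHdfsConfigFields (built on TestConfigurationFieldsBase) cross-references the property names declared in hdfs-default.xml against the String constants declared in org.apache.hadoop.hdfs.DFSConfigKeys, and fails when either side has entries the other lacks. Below is a rough, purely illustrative sketch of that comparison, not the actual test code; in particular the "_KEY" suffix filter is an assumption about the naming convention.

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class ConfigFieldsSketch {
  public static void main(String[] args) throws Exception {
    // Collect the property names shipped in hdfs-default.xml (no other defaults loaded).
    Configuration conf = new Configuration(false);
    conf.addResource("hdfs-default.xml");
    Set<String> xmlKeys = new HashSet<String>();
    for (Map.Entry<String, String> e : conf) {
      xmlKeys.add(e.getKey());
    }

    // Collect the String constants declared in DFSConfigKeys.
    Set<String> classKeys = new HashSet<String>();
    for (Field f : DFSConfigKeys.class.getDeclaredFields()) {
      if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class
          && f.getName().endsWith("_KEY")) {
        classKeys.add((String) f.get(null));
      }
    }

    // Properties present in the XML but absent from the class are what the
    // "2 properties missing" assertion above is reporting.
    Set<String> missing = new HashSet<String>(xmlKeys);
    missing.removeAll(classKeys);
    System.out.println("hdfs-default.xml properties missing from DFSConfigKeys: " + missing);
  }
}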


FAILED:  org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)



Hadoop-Hdfs-trunk - Build # 2124 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2124/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6875 lines...]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 45.260 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.071 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-13T14:20:19+00:00
[INFO] Final Memory: 52M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362741 bytes
Compression is 0.0%
Took 16 sec
Recording test results
Updating HADOOP-9723
Updating MAPREDUCE-6361
Updating YARN-3613
Updating YARN-3539
Updating HDFS-6184
Updating MAPREDUCE-6366
Updating MAPREDUCE-6251
Updating HDFS-8255
Updating HADOOP-11962
Updating YARN-3629
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.tools.TestHdfsConfigFields.testCompareXmlAgainstConfigurationClass

Error Message:
hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Stack Trace:
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)



Hadoop-Hdfs-trunk - Build # 2126 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2126/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6845 lines...]
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.532 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-05-15T14:30:17+00:00
[INFO] Final Memory: 52M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363010 bytes
Compression is 0.0%
Took 12 sec
Recording test results
Updating MAPREDUCE-5708
Updating HDFS-8371
Updating HADOOP-11713
Updating YARN-3505
Updating HADOOP-11960
Updating HDFS-8350
Updating HDFS-6888
Updating MAPREDUCE-6273
Updating YARN-1519
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs.org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs

Error Message:
org/apache/hadoop/conf/ReconfigurableBase

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/conf/ReconfigurableBase
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1398)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:835)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:471)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:430)
	at org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs.clusterSetupAtBeginning(TestViewFileSystemWithXAttrs.java:62)


FAILED:  org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs.org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs.ClusterShutdownAtEnd(TestViewFileSystemWithXAttrs.java:74)



Hadoop-Hdfs-trunk - Build # 2127 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2127/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8067 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.643 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.070 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-16T14:19:55+00:00
[INFO] Final Memory: 68M/696M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362668 bytes
Compression is 0.0%
Took 27 sec
Recording test results
Updating HDFS-8394
Updating HDFS-8403
Updating HDFS-8397
Updating YARN-3505
Updating YARN-3526
Updating YARN-2421
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Hdfs-trunk - Build # 2129 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2129/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8063 lines...]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.737 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-18T14:21:06+00:00
[INFO] Final Memory: 54M/698M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362686 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HADOOP-11939
Updating HADOOP-10582
Updating HDFS-8332
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Hdfs-trunk - Build # 2130 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2130/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8073 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.824 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:46 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:47 h
[INFO] Finished at: 2015-05-19T14:21:36+00:00
[INFO] Final Memory: 54M/696M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362791 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HADOOP-11581
Updating HADOOP-11949
Updating HADOOP-11944
Updating YARN-3541
Updating HADOOP-1540
Updating HADOOP-8934
Updating HADOOP-10971
Updating HADOOP-11103
Updating HADOOP-11884
Updating HDFS-8345
Updating HDFS-8405
Updating HDFS-8412
Updating HDFS-6348
Updating HDFS-4185
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Hdfs-trunk - Build # 2133 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2133/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6833 lines...]
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.837 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-22T14:20:00+00:00
[INFO] Final Memory: 61M/678M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363209 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateFailure

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK], DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4ccccb6c,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:586)
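
The IOException above ends with a hint: the failing client could relax the replacement behaviour through 'dfs.client.block.write.replace-datanode-on-failure.policy'. A minimal sketch of setting that key on a client Configuration follows; the value "NEVER" and the companion ".enable" key are illustrative assumptions, not something the failing test actually does.

    // Sketch only, not the TestFileTruncate code. The policy key comes from the
    // error text above; the value "NEVER" is just an illustration.
    import org.apache.hadoop.conf.Configuration;

    public class ReplaceDatanodePolicyExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // With only two live datanodes (as the current=/original= lists above
        // suggest), the DEFAULT policy has no third node to swap in, so pipeline
        // recovery fails with the IOException shown. "NEVER" skips the
        // replacement attempt at the cost of reduced redundancy for that write.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        // Assumed companion switch that enables/disables the feature entirely.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        System.out.println(
            conf.get("dfs.client.block.write.replace-datanode-on-failure.policy"));
      }
    }
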



Hadoop-Hdfs-trunk - Build # 2134 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2134/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6844 lines...]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.291 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:41 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.068 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-23T14:16:49+00:00
[INFO] Final Memory: 60M/719M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362646 bytes
Compression is 0.0%
Took 34 sec
Recording test results
Updating YARN-3701
Updating YARN-3707
Updating MAPREDUCE-6204
Updating HADOOP-11927
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST

Error Message:
dir has ERROR

Stack Trace:
java.lang.IllegalStateException: dir has ERROR
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.checkErrorState(TestAppendSnapshotTruncate.java:430)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.stop(TestAppendSnapshotTruncate.java:483)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST(TestAppendSnapshotTruncate.java:128)
Caused by: java.lang.IllegalStateException: null
	at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.pause(TestAppendSnapshotTruncate.java:479)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.pauseAllFiles(TestAppendSnapshotTruncate.java:247)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:220)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:140)
	at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker$1.run(TestAppendSnapshotTruncate.java:454)
	at java.lang.Thread.run(Thread.java:745)
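
One detail in the nested cause above is easy to misread: "IllegalStateException: null" does not point at a null-related bug, it means the exception carries no message. Guava's no-message Preconditions.checkState(boolean), which the pause() frame presumably hits given the null, throws without any message text. A standalone sketch (not the test's own code) of the two overloads:

    // Standalone illustration of why the "Caused by" line prints "null".
    import com.google.common.base.Preconditions;

    public class CheckStateMessageExample {
      public static void main(String[] args) {
        try {
          Preconditions.checkState(false);                    // no message -> getMessage() is null
        } catch (IllegalStateException e) {
          System.out.println("message = " + e.getMessage());  // prints: message = null
        }
        // The templated overload keeps such failures self-describing in CI logs.
        Preconditions.checkState(2 + 2 == 4, "worker %s is in state %s", "dir", "ERROR");
      }
    }
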



Hadoop-Hdfs-trunk - Build # 2135 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2135/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8079 lines...]
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.530 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.072 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-24T14:20:12+00:00
[INFO] Final Memory: 54M/685M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362716 bytes
Compression is 0.0%
Took 6.9 sec
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Hdfs-trunk - Build # 2139 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7189 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.787 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:40 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:41 h
[INFO] Finished at: 2015-05-28T14:16:02+00:00
[INFO] Final Memory: 60M/679M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363168 bytes
Compression is 0.0%
Took 13 sec
Recording test results
Updating YARN-3700
Updating YARN-3581
Updating MAPREDUCE-6304
Updating YARN-2355
Updating HADOOP-9891
Updating YARN-3722
Updating HDFS-8431
Updating HDFS-8482
Updating HDFS-8135
Updating MAPREDUCE-6336
Updating HDFS-5033
Updating YARN-3626
Updating YARN-3647
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
9 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0Internal(TestBalancer.java:909)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:905)
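
These TestBalancer regressions all fail in the same place, and the stack trace explains why: Hadoop's Configuration loads its XML resources lazily, so the first setLong() in initConf() walks set -> getProps -> loadResources -> loadResource -> parse. The ZipException therefore points at a corrupted compressed resource being read during that parse (typically a damaged jar on the test classpath) rather than at the balancer logic itself. A minimal sketch of the lazy-loading behaviour; the property key below is only an example, not the one initConf sets.

    // Sketch: the first write to a fresh Configuration forces the deferred
    // parse of the default XML resources found on the classpath.
    import org.apache.hadoop.conf.Configuration;

    public class LazyConfigLoadExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();      // nothing parsed yet
        // This call triggers getProps() -> loadResources() -> XML parse, the
        // same path shown in the TestBalancer stack traces.
        conf.setLong("dfs.blocksize", 128L * 1024 * 1024);
        System.out.println("dfs.blocksize = " + conf.getLong("dfs.blocksize", 0L));
      }
    }

Because each fresh Configuration repeats that parse, every test going through initConf() hits the same broken resource, which is why the identical ZipException recurs across the TestBalancer methods listed below.
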


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1Internal(TestBalancer.java:921)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer1(TestBalancer.java:917)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2Internal(TestBalancer.java:948)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer2(TestBalancer.java:944)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithIncludeListWithPorts

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithIncludeListWithPorts(TestBalancer.java:1208)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:821)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithExcludeList

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerCliWithExcludeList(TestBalancer.java:1103)


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithExcludeListWithPorts

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block lengths
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.initConf(TestBalancer.java:116)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithExcludeListWithPorts(TestBalancer.java:1090)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl.testSkipAclEnforcementSuper

Error Message:
org/apache/hadoop/util/IdentityHashStore$Visitor

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/IdentityHashStore$Visitor
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionGranted(AclTestHelpers.java:137)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementSuper(FSAclBaseTest.java:1191)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.IdentityHashStore$Visitor
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionGranted(AclTestHelpers.java:137)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementSuper(FSAclBaseTest.java:1191)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl.testSkipAclEnforcementPermsDisabled

Error Message:
org/apache/hadoop/util/IdentityHashStore$Visitor

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/IdentityHashStore$Visitor
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionDenied(AclTestHelpers.java:118)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementPermsDisabled(FSAclBaseTest.java:1171)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.IdentityHashStore$Visitor
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1188)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:308)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:304)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
	at org.apache.hadoop.hdfs.DFSTestUtil.readFileBuffer(DFSTestUtil.java:338)
	at org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.assertFilePermissionDenied(AclTestHelpers.java:118)
	at org.apache.hadoop.hdfs.server.namenode.FSAclBaseTest.testSkipAclEnforcementPermsDisabled(FSAclBaseTest.java:1171)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)



Jenkins build is back to normal : Hadoop-Hdfs-trunk #2141

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2141/changes>


Build failed in Jenkins: Hadoop-Hdfs-trunk #2140

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/changes>

Changes:

[aw] HADOOP-12004. test-patch breaks with reexec in certain situations (Sean Busbey via aw)

[aw] HADOOP-12035. shellcheck plugin displays a wrong version potentially (Kengo Seki via aw)

[aw] HADOOP-12030. test-patch should only report on newly introduced findbugs warnings. (Sean Busbey via aw)

[xgong] YARN-3723. Need to clearly document primaryFilter and otherInfo value

[aw] HADOOP-11406. xargs -P is not portable (Kengo Seki via aw)

[aw] HADOOP-11142. Remove hdfs dfs reference from file system shell documentation (Kengo Seki via aw)

[aw] HADOOP-7947. Validate XMLs if a relevant tool is available, when using scripts (Kengo Seki via aw)

[aw] HADOOP-11983. HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do (Sangjin Lee via aw)

[cmccabe] HDFS-8407. libhdfs hdfsListDirectory must set errno to 0 on success (Masatake Iwasaki via Colin P. McCabe)

[cmccabe] HDFS-8429. Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread.  (zhouyingchao via cmccabe)

[cmccabe] HADOOP-11894. Bump the version of Apache HTrace to 3.2.0-incubating (Masatake Iwasaki via Colin P. McCabe)

[aw] HADOOP-11930. test-patch in offline mode should tell maven to be in offline mode (Sean Busbey via aw)

[cnauroth] HADOOP-11959. WASB should configure client side socket timeout in storage client blob request options. Contributed by Ivan Mitic.

[aw]  HADOOP-12022. fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits (aw)

[cnauroth] HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop. Contributed by Larry McCay.

[vinodkv] Fixed more FilesSystemRMStateStore issues. Contributed by Vinod Kumar Vavilapalli.

[wangda] YARN-3716. Node-label-expression should be included by ResourceRequestPBImpl.toString. (Xianyin Xin via wangda)

[aajisaka] HDFS-8443. Document dfs.namenode.service.handler.count in hdfs-site.xml. Contributed by J.Andreina.

[vinayakumarb] HDFS-7401. Add block info to DFSInputStream' WARN message when it adds node to deadNodes (Contributed by Arshad Mohammad)

[vinayakumarb] HADOOP-12042. Users may see TrashPolicy if hdfs dfs -rm is run (Contributed by Andreina J)

------------------------------------------
[...truncated 6171 lines...]
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.671 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.614 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.731 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.718 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.317 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.242 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.881 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.731 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.908 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.875 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.474 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.49 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.633 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.959 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.269 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.766 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.556 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.15 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.758 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.807 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.105 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.171 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.825 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.72 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.286 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Running org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.232 sec - in org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.824 sec - in org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.504 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Running org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.796 sec - in org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.477 sec - in org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.935 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Running org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.369 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.439 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.475 sec - in org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.631 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsck
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec - in org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.342 sec - in org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Running org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.611 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Running org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.171 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.373 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.944 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.357 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.099 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.522 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.171 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.128 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.942 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Running org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.795 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.379 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Running org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.878 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.755 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.292 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.827 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.346 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.353 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.442 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.392 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.302 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.784 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.007 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.142 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.054 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.506 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.762 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.079 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.239 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.048 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.995 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.943 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.021 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints

Results :

Tests run: 2260, Failures: 0, Errors: 0, Skipped: 13

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 54.973 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:24 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:25 h
[INFO] Finished at: 2015-05-29T13:00:37+00:00
[INFO] Final Memory: 76M/931M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7381484223056387280.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5655336589549814393tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1014370353868269233357tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
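A hedged sketch of acting on that advice: the fork died while org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints was still running (no result line was printed for it), so one plausible next step is to resume from the failed module and run only that class with full debug logging. The "clean test" goals and the -Dtest selector below are assumptions for illustration, not values taken from the job configuration:

    # resume at hadoop-hdfs and run only the class that was active when the fork crashed (assumed goals)
    mvn clean test -rf :hadoop-hdfs -Dtest=TestStandbyCheckpoints -X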
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363207 bytes
Compression is 0.0%
Took 9.2 sec
Recording test results
Updating HADOOP-11983
Updating HADOOP-11934
Updating HADOOP-11894
Updating HADOOP-11959
Updating HADOOP-11142
Updating HADOOP-12004
Updating HDFS-7401
Updating HDFS-8443
Updating YARN-3723
Updating HDFS-8407
Updating HDFS-8429
Updating HADOOP-12035
Updating HADOOP-11406
Updating HADOOP-11930
Updating HADOOP-12022
Updating HADOOP-12030
Updating HADOOP-7947
Updating HADOOP-12042
Updating YARN-3716

Hadoop-Hdfs-trunk - Build # 2140 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6364 lines...]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 54.973 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:24 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:25 h
[INFO] Finished at: 2015-05-29T13:00:37+00:00
[INFO] Final Memory: 76M/931M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7381484223056387280.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5655336589549814393tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1014370353868269233357tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363207 bytes
Compression is 0.0%
Took 9.2 sec
Recording test results
Updating HADOOP-11983
Updating HADOOP-11934
Updating HADOOP-11894
Updating HADOOP-11959
Updating HADOOP-11142
Updating HADOOP-12004
Updating HDFS-7401
Updating HDFS-8443
Updating YARN-3723
Updating HDFS-8407
Updating HDFS-8429
Updating HADOOP-12035
Updating HADOOP-11406
Updating HADOOP-11930
Updating HADOOP-12022
Updating HADOOP-12030
Updating HADOOP-7947
Updating HADOOP-12042
Updating YARN-3716
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2139

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/changes>

Changes:

[wheat9] Update CHANGES.txt for HDFS-8135.

[wangda] YARN-3647. RMWebServices api's should use updated api from CommonNodeLabelsManager to get NodeLabel object. (Sunil G via wangda)

[wangda] MAPREDUCE-6304. Specifying node labels when submitting MR jobs. (Naganarasimha G R via wangda)

[cnauroth] YARN-3626. On Windows localized resources are not moved to the front of the classpath when they should be. Contributed by Craig Welch.

[gera] MAPREDUCE-6336. Enable v2 FileOutputCommitter by default. (Siqi Li via gera)

[wangda] YARN-3581. Deprecate -directlyAccessNodeLabelStore in RMAdminCLI. (Naganarasimha G R via wangda)

[wang] HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang.

[aw] HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw)

[aw] YARN-2355. MAX_APP_ATTEMPTS_ENV may no longer be a useful env var for a container (Darrell Taylor via aw)

[aw] HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source (Darrell Taylor via aw)

[zjshen] YARN-3700. Made generic history service load a number of latest applications according to the parameter or the configuration. Contributed by Xuan Gong.

[cnauroth] HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer.

[devaraj] YARN-3722. Merge multiple TestWebAppUtils into

------------------------------------------
[...truncated 6996 lines...]
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.407 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.429 sec - in org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.502 sec - in org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.969 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.459 sec - in org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.349 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.692 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.107 sec - in org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.77 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.722 sec - in org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.016 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.256 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.377 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.307 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.244 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.221 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.093 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.067 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.664 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.085 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.363 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.618 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.296 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.049 sec - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.378 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.911 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.133 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 178.364 sec - in org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.102 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.43 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.804 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.042 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.09 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.044 sec - in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.387 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.058 sec - in org.apache.hadoop.hdfs.TestFetchImage
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.706 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.062 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.447 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.625 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.492 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.039 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.857 sec - in org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.59 sec - in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.027 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.562 sec - in org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.534 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.086 sec - in org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.696 sec - in org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.006 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.122 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.606 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.427 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.221 sec - in org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.475 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.925 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.804 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.659 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.797 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.031 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.556 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.34 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.059 sec - in org.apache.hadoop.security.TestRefreshUserMappings

Results :

Tests in error: 
  TestNameNodeAcl>FSAclBaseTest.testSkipAclEnforcementSuper:1191 » NoClassDefFound
  TestNameNodeAcl>FSAclBaseTest.testSkipAclEnforcementPermsDisabled:1171 » NoClassDefFound
  TestBalancer.testBalancer0:905->testBalancer0Internal:909->initConf:116 » Runtime
  TestBalancer.testBalancer1:917->testBalancer1Internal:921->initConf:116 » Runtime
  TestBalancer.testBalancer2:944->testBalancer2Internal:948->initConf:116 » Runtime
  TestBalancer.testBalancerCliWithIncludeListWithPorts:1208->initConf:116 » Runtime
  TestBalancer.testUnknownDatanode:821->initConf:116 » Runtime java.util.zip.Zip...
  TestBalancer.testBalancerCliWithExcludeList:1103->initConf:116 » Runtime java....
  TestBalancer.testBalancerWithExcludeListWithPorts:1090->initConf:116 » Runtime

Tests run: 3439, Failures: 0, Errors: 9, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.787 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:40 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:41 h
[INFO] Finished at: 2015-05-28T14:16:02+00:00
[INFO] Final Memory: 60M/679M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
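A hedged sketch of following that advice: the per-class reports land under hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports (the path referenced above), and the classes listed in the error summary could be re-run in isolation after resuming from the hadoop-hdfs module. The goals, the comma-separated test selector, and the report file name are assumptions for illustration:

    # resume at hadoop-hdfs and re-run only the classes that reported errors (assumed goals/selectors)
    mvn test -rf :hadoop-hdfs -Dtest=TestBalancer,TestNameNodeAcl
    # inspect the per-class Surefire report for one of them (assumed file name)
    less hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports/org.apache.hadoop.hdfs.server.balancer.TestBalancer.txt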
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363168 bytes
Compression is 0.0%
Took 13 sec
Recording test results
Updating YARN-3700
Updating YARN-3581
Updating MAPREDUCE-6304
Updating YARN-2355
Updating HADOOP-9891
Updating YARN-3722
Updating HDFS-8431
Updating HDFS-8482
Updating HDFS-8135
Updating MAPREDUCE-6336
Updating HDFS-5033
Updating YARN-3626
Updating YARN-3647

Build failed in Jenkins: Hadoop-Hdfs-trunk #2138

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2138/changes>

Changes:

[ozawa] MAPREDUCE-6364. Add a Kill link to Task Attempts page. Contributed by Ryu Kobayashi.

[vinodkv] YARN-160. Enhanced NodeManager to automatically obtain cpu/memory values from underlying OS when configured to do so. Contributed by Varun Vasudev.

[jianhe] YARN-3632. Ordering policy should be allowed to reorder an application when demand changes. Contributed by Craig Welch

[cmccabe] HADOOP-11969. ThreadLocal initialization in several classes is not thread safe (Sean Busbey via Colin P. McCabe)

[wangda] YARN-3686. CapacityScheduler should trim default_node_label_expression. (Sunil G via wangda)

[aajisaka] HADOOP-11242. Record the time of calling in tracing span of IPC server. Contributed by Mastake Iwasaki.

------------------------------------------
[...truncated 6656 lines...]
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.414 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.49 sec - in org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.731 sec - in org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.587 sec - in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestTokenAspect
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.466 sec - in org.apache.hadoop.hdfs.web.TestTokenAspect
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.213 sec - in org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.276 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.697 sec - in org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.922 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.248 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.622 sec - in org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.196 sec - in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 28.729 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.918 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.749 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.583 sec - in org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.875 sec - in org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.201 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.374 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Running org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.405 sec - in org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.082 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.698 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.922 sec - in org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.476 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.941 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.08 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.714 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.474 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.698 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.773 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.748 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.19 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.825 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.641 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.457 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.625 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.482 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.465 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.17 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.551 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.39 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.463 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.06 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.461 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.043 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.72 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.605 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.723 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.999 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.39 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.504 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.696 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.089 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.143 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.633 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.905 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.253 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.619 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.764 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.96 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.863 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.821 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.236 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.023 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.176 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.88 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.169 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.706 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.434 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.946 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.467 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.421 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.916 sec - in org.apache.hadoop.TestGenericRefresh

Results :

Tests in error: 
  TestDFSUpgradeWithHA.testFinalizeWithJournalNodes:428 » IO java.lang.RuntimeEx...

Tests run: 3438, Failures: 0, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.692 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-27T14:24:57+00:00
[INFO] Final Memory: 55M/705M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362660 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-3686
Updating HADOOP-11242
Updating YARN-160
Updating HADOOP-11969
Updating YARN-3632
Updating MAPREDUCE-6364

Hadoop-Hdfs-trunk - Build # 2138 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2138/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6849 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.692 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-27T14:24:57+00:00
[INFO] Final Memory: 55M/705M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362660 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating YARN-3686
Updating HADOOP-11242
Updating YARN-160
Updating HADOOP-11969
Updating YARN-3632
Updating MAPREDUCE-6364
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testFinalizeWithJournalNodes

Error Message:
java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out

Stack Trace:
java.io.IOException: java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
	at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.doGetUrl(TransferFsImage.java:414)
	at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:399)
	at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.downloadImageToStorage(TransferFsImage.java:116)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.downloadImage(BootstrapStandby.java:318)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.doRun(BootstrapStandby.java:204)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.access$000(BootstrapStandby.java:76)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:114)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:110)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:110)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:421)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testFinalizeWithJournalNodes(TestDFSUpgradeWithHA.java:428)
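
A minimal local-reproduction sketch (not part of the report above; it assumes a trunk checkout whose other modules are already built, and uses Surefire's single-method selection):

    mvn test -Dtest=TestDFSUpgradeWithHA#testFinalizeWithJournalNodes -pl hadoop-hdfs-project/hadoop-hdfs

Judging only from the trace above, the read timeout fires while BootstrapStandby downloads the fsimage over HTTP (TransferFsImage.doGetUrl), so a slow or heavily loaded build slave could plausibly trip it without any code change; that reading is an inference, not something the report states.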



Build failed in Jenkins: Hadoop-Hdfs-trunk #2137

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2137/changes>

Changes:

[xgong] YARN-2238. Filtering on UI sticks even if I move away from the page.

[aajisaka] HADOOP-8751. NPE in Token.toString() when Token is constructed using null identifier. Contributed by kanaka kumar avvaru.

[ozawa] YARN-2336. Fair scheduler's REST API returns a missing '[' bracket JSON for deep queue tree. Contributed by Kenji Kikushima and Akira Ajisaka.

------------------------------------------
[...truncated 7880 lines...]
     [exec] 2015-05-26 14:16:25,111 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-26 14:16:25,111 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-26 14:16:25,113 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 60336
     [exec] 2015-05-26 14:16:25,113 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-26 14:16:25,170 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:60336
     [exec] 2015-05-26 14:16:25,299 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(159)) - Listening HTTP traffic on /127.0.0.1:36631
     [exec] 2015-05-26 14:16:25,301 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-26 14:16:25,301 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-26 14:16:25,314 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-26 14:16:25,315 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 43648
     [exec] 2015-05-26 14:16:25,323 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:43648
     [exec] 2015-05-26 14:16:25,335 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-26 14:16:25,338 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-26 14:16:25,348 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:56314 starting to offer service
     [exec] 2015-05-26 14:16:25,353 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-26 14:16:25,573 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 43648: starting
     [exec] 2015-05-26 14:16:25,782 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 27540@asf909.gq1.ygridcore.net
     [exec] 2015-05-26 14:16:25,782 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,782 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-26 14:16:25,823 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454>
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-26 14:16:25,824 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-648551470-67.195.81.153-1432649783454 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454/current>
     [exec] 2015-05-26 14:16:25,826 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 27540@asf909.gq1.ygridcore.net
     [exec] 2015-05-26 14:16:25,827 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,827 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454>
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454> is not formatted for BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-26 14:16:25,861 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-648551470-67.195.81.153-1432649783454 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454/current>
     [exec] 2015-05-26 14:16:25,863 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=1443975048;bpid=BP-648551470-67.195.81.153-1432649783454;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=1443975048;c=0;bpid=BP-648551470-67.195.81.153-1432649783454;dnuuid=null
     [exec] 2015-05-26 14:16:25,865 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,886 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3
     [exec] 2015-05-26 14:16:25,887 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-26 14:16:25,887 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c
     [exec] 2015-05-26 14:16:25,887 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-26 14:16:25,890 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-26 14:16:25,897 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432669258897 with interval 21600000
     [exec] 2015-05-26 14:16:25,897 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:25,898 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-26 14:16:25,899 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-26 14:16:25,910 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-648551470-67.195.81.153-1432649783454 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 10ms
     [exec] 2015-05-26 14:16:25,910 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-648551470-67.195.81.153-1432649783454 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 11ms
     [exec] 2015-05-26 14:16:25,910 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-648551470-67.195.81.153-1432649783454: 13ms
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-26 14:16:25,911 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-648551470-67.195.81.153-1432649783454/current/replicas> doesn't exist 
     [exec] 2015-05-26 14:16:25,911 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-648551470-67.195.81.153-1432649783454/current/replicas> doesn't exist 
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 1ms
     [exec] 2015-05-26 14:16:25,911 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-648551470-67.195.81.153-1432649783454 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-26 14:16:25,912 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 1ms
     [exec] 2015-05-26 14:16:25,913 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 beginning handshake with NN
     [exec] 2015-05-26 14:16:25,922 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:52030, datanodeUuid=f7278dd1-3fbc-45fd-be83-aa123f099904, infoPort=36631, infoSecurePort=0, ipcPort=43648, storageInfo=lv=-56;cid=testClusterID;nsid=1443975048;c=0) storage f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,922 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-26 14:16:25,923 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:52030
     [exec] 2015-05-26 14:16:25,925 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-26 14:16:25,925 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-26 14:16:25,929 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 successfully registered with NN
     [exec] 2015-05-26 14:16:25,929 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:56314 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-26 14:16:25,940 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-26 14:16:25,940 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3 for DN 127.0.0.1:52030
     [exec] 2015-05-26 14:16:25,941 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c for DN 127.0.0.1:52030
     [exec] 2015-05-26 14:16:25,950 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-26 14:16:25,950 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314
     [exec] 2015-05-26 14:16:25,963 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c from datanode f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,964 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-b3e2f79b-6625-4f92-bf87-f72fdffbc23c node DatanodeRegistration(127.0.0.1:52030, datanodeUuid=f7278dd1-3fbc-45fd-be83-aa123f099904, infoPort=36631, infoSecurePort=0, ipcPort=43648, storageInfo=lv=-56;cid=testClusterID;nsid=1443975048;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-26 14:16:25,964 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3 from datanode f7278dd1-3fbc-45fd-be83-aa123f099904
     [exec] 2015-05-26 14:16:25,965 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-fdd87e77-6458-4ad4-a2b7-7430eac42da3 node DatanodeRegistration(127.0.0.1:52030, datanodeUuid=f7278dd1-3fbc-45fd-be83-aa123f099904, infoPort=36631, infoSecurePort=0, ipcPort=43648, storageInfo=lv=-56;cid=testClusterID;nsid=1443975048;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-26 14:16:25,989 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xc5f894496d527836,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 36 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-26 14:16:25,989 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:26,034 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-26 14:16:26,046 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-26 14:16:26,046 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-26 14:16:26,046 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-26 14:16:26,047 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-26 14:16:26,048 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-26 14:16:26,160 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 43648
     [exec] 2015-05-26 14:16:26,161 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 43648
     [exec] 2015-05-26 14:16:26,161 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314 interrupted
     [exec] 2015-05-26 14:16:26,161 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-26 14:16:26,162 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904) service to localhost/127.0.0.1:56314
     [exec] 2015-05-26 14:16:26,265 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-648551470-67.195.81.153-1432649783454 (Datanode Uuid f7278dd1-3fbc-45fd-be83-aa123f099904)
     [exec] 2015-05-26 14:16:26,266 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-648551470-67.195.81.153-1432649783454
     [exec] 2015-05-26 14:16:26,267 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-26 14:16:26,267 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-26 14:16:26,267 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-26 14:16:26,268 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-26 14:16:26,273 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-26 14:16:26,273 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-26 14:16:26,273 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-26 14:16:26,274 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-26 14:16:26,274 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 2 2 
     [exec] 2015-05-26 14:16:26,274 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-26 14:16:26,275 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-26 14:16:26,276 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-26 14:16:26,277 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 56314
     [exec] 2015-05-26 14:16:26,278 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 56314
     [exec] 2015-05-26 14:16:26,278 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-26 14:16:26,278 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-26 14:16:26,313 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-26 14:16:26,313 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-26 14:16:26,315 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-26 14:16:26,415 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-26 14:16:26,417 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-26 14:16:26,417 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 45.723 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:43 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:44 h
[INFO] Finished at: 2015-05-26T14:18:46+00:00
[INFO] Final Memory: 55M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362687 bytes
Compression is 0.0%
Took 6.3 sec
Recording test results
Updating YARN-2336
Updating HADOOP-8751
Updating YARN-2238

Hadoop-Hdfs-trunk - Build # 2137 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2137/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8073 lines...]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 45.723 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:43 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:44 h
[INFO] Finished at: 2015-05-26T14:18:46+00:00
[INFO] Final Memory: 55M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362687 bytes
Compression is 0.0%
Took 6.3 sec
Recording test results
Updating YARN-2336
Updating HADOOP-8751
Updating YARN-2238
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2136

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2136/changes>

Changes:

[wheat9] HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang.

------------------------------------------
[...truncated 7811 lines...]
java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:327)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:90)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:606)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:456)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:485)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:481)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(WebHdfsFileSystem.java:881)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:375)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:366)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:359)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:352)
	at org.apache.hadoop.hdfs.TestEncryptionZones.testReadWriteUsingWebHdfs(TestEncryptionZones.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 15, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 130.297 sec - in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.311 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.189 sec - in org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.901 sec - in org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.92 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.571 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.216 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.311 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.25 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.74 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.097 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.4 sec - in org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.482 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.933 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.908 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.304 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.087 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.361 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.062 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.402 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.101 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.964 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.047 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.515 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec - in org.apache.hadoop.hdfs.TestDFSConfigKeys
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.218 sec - in org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.653 sec - in org.apache.hadoop.hdfs.TestReplication
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.324 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.393 sec - in org.apache.hadoop.hdfs.TestPipelines
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.553 sec - in org.apache.hadoop.hdfs.TestDeprecatedKeys
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.91 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.9 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.402 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.359 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.669 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.634 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.66 sec - in org.apache.hadoop.hdfs.TestReadWhileWriting
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.995 sec - in org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.508 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.103 sec - in org.apache.hadoop.hdfs.TestSetrepDecreasing
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Running org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.864 sec - in org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.384 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.309 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.232 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.475 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS

Results :

Tests in error: 
  TestFileTruncate.testTruncateFailure » IO Failed to replace a bad datanode on ...
  TestFileTruncate.testSnapshotWithAppendTruncate » IO Failed to replace a bad d...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestFileTruncate.setup:119 » Remote The directory /test cannot be deleted sinc...
  TestEncryptionZonesWithKMS>TestEncryptionZones.testReadWriteUsingWebHdfs:621 » SocketTimeout

Tests run: 3438, Failures: 0, Errors: 12, Skipped: 17
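
The repeated TestFileTruncate.setup:119 errors ("The directory /test cannot be deleted sinc...") most plausibly cascade from the first two truncate failures leaving a file under /test open for recovery; that is an inference from the messages above, not something this log states. A sketch for re-running just that class locally (assuming a built trunk checkout, using Surefire's test selection):

    mvn test -Dtest=TestFileTruncate -pl hadoop-hdfs-project/hadoop-hdfs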

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.255 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-25T14:17:21+00:00
[INFO] Final Memory: 67M/697M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362814 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating HDFS-8377

Hadoop-Hdfs-trunk - Build # 2136 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2136/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8004 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.255 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:43 h
[INFO] Finished at: 2015-05-25T14:17:21+00:00
[INFO] Final Memory: 67M/697M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362814 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating HDFS-8377
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
12 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testReadWriteUsingWebHdfs

Error Message:
Read timed out

Stack Trace:
java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
	at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:327)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:90)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:606)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:456)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:485)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:481)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(WebHdfsFileSystem.java:881)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:375)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:366)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:359)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:352)
	at org.apache.hadoop.hdfs.TestEncryptionZones.testReadWriteUsingWebHdfs(TestEncryptionZones.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateFailure

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-42d18bd2-2747-4d55-b26a-32237469d4af,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-91fbb18e-c271-4086-9fe8-f3ffb526c4c4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:586)
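
For context, the replacement policy named in this error message is a client-side setting, as the message itself says. Below is a minimal, illustrative sketch of relaxing it in a client's Configuration; the property names are the ones quoted above, while the best-effort key (assumes a Hadoop version that supports it) and the standalone class are illustration-only assumptions, not part of the failing test.

    // Sketch: relax pipeline-recovery behaviour for a small (2-3 datanode)
    // test cluster so that failing to find a replacement datanode does not
    // abort the write outright.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class RelaxReplaceDatanodePolicy {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep the DEFAULT policy but tolerate a failed replacement attempt.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);
        // Alternatively, for very small clusters, skip replacement entirely:
        // conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        try (FileSystem fs = FileSystem.get(conf)) {
          System.out.println("Client configured against " + fs.getUri());
        }
      }
    }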


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testSnapshotWithAppendTruncate

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK], DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK], DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:54188,DS-957453fc-93be-4a09-b57a-d50cbf02f112,DISK], DatanodeInfoWithStorage[127.0.0.1:34270,DS-abc66bf4-06f1-48ba-aa4c-cf685f0973f2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:586)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testCopyOnTruncateWithDataNodesRestart

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
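
The remaining TestFileTruncate failures below all show the same setup-time error: the recursive delete of /test is rejected because the directory is snapshottable and still has snapshots, presumably left behind by the earlier truncate failures. Purely as an illustration (the /test path and the .snapshot listing convention come from HDFS; the class name and cleanup order are assumptions, not the test's actual code), a snapshottable directory can be cleared before deletion roughly like this:

    // Sketch: remove all snapshots of /test, then delete the directory.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class CleanupSnapshottableDir {
      public static void main(String[] args) throws Exception {
        DistributedFileSystem dfs =
            (DistributedFileSystem) FileSystem.get(new Configuration());
        Path dir = new Path("/test");
        Path snapshotRoot = new Path(dir, ".snapshot");
        if (dfs.exists(snapshotRoot)) {
          // Each child of /test/.snapshot is one snapshot of /test.
          for (FileStatus s : dfs.listStatus(snapshotRoot)) {
            dfs.deleteSnapshot(dir, s.getPath().getName());
          }
        }
        dfs.delete(dir, true); // no snapshots remain, so the delete succeeds
        dfs.close();
      }
    }

The equivalent shell-side cleanup would use hdfs dfs -deleteSnapshot for each snapshot followed by an ordinary hdfs dfs -rm -r.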


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testSnapshotWithTruncates

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateRecovery

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateShellCommandOnBlockBoundary

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestart

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testSnapshotTruncateThenDeleteSnapshot

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateEditLogLoad

Error Message:
The directory /test cannot be deleted since /test is snapshottable and already has snapshots
 at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
 at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted since /test is snapshottable and already has snapshots
	at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
	at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3154)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:932)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2171)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1440)
	at org.apache.hadoop.ipc.Client.call(Client.java:1371)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy22.delete(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
	at com.sun.proxy.$Proxy23.delete(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1715)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
	at org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)



Build failed in Jenkins: Hadoop-Hdfs-trunk #2135

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2135/>

------------------------------------------
[...truncated 7886 lines...]
     [exec] 2015-05-24 14:17:47,502 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(284)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
     [exec] 2015-05-24 14:17:47,502 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.datanode is not defined
     [exec] 2015-05-24 14:17:47,503 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(678)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
     [exec] 2015-05-24 14:17:47,504 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-24 14:17:47,504 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-24 14:17:47,506 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 57485
     [exec] 2015-05-24 14:17:47,506 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-24 14:17:47,557 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:57485
     [exec] 2015-05-24 14:17:47,678 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(162)) - Listening HTTP traffic on /127.0.0.1:60590
     [exec] 2015-05-24 14:17:47,680 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-24 14:17:47,680 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-24 14:17:47,693 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-24 14:17:47,694 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 48866
     [exec] 2015-05-24 14:17:47,701 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:48866
     [exec] 2015-05-24 14:17:47,713 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-24 14:17:47,715 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-24 14:17:47,725 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:48928 starting to offer service
     [exec] 2015-05-24 14:17:47,732 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-24 14:17:47,732 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 48866: starting
     [exec] 2015-05-24 14:17:48,174 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 25938@asf904.gq1.ygridcore.net
     [exec] 2015-05-24 14:17:48,174 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,174 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-24 14:17:48,223 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862>
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-24 14:17:48,224 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-149846206-67.195.81.148-1432477065862 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862/current>
     [exec] 2015-05-24 14:17:48,227 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 25938@asf904.gq1.ygridcore.net
     [exec] 2015-05-24 14:17:48,227 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,227 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862>
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862> is not formatted for BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-24 14:17:48,269 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-149846206-67.195.81.148-1432477065862 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862/current>
     [exec] 2015-05-24 14:17:48,271 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=407141728;bpid=BP-149846206-67.195.81.148-1432477065862;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=407141728;c=0;bpid=BP-149846206-67.195.81.148-1432477065862;dnuuid=null
     [exec] 2015-05-24 14:17:48,272 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,294 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5
     [exec] 2015-05-24 14:17:48,294 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-24 14:17:48,295 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-392e90f7-b807-49fa-a540-7f799afac17f
     [exec] 2015-05-24 14:17:48,295 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-24 14:17:48,299 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-24 14:17:48,306 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(333)) - Periodic Directory Tree Verification scan starting at 1432495683306 with interval 21600000
     [exec] 2015-05-24 14:17:48,306 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,306 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-24 14:17:48,307 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-24 14:17:48,320 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-149846206-67.195.81.148-1432477065862 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 14ms
     [exec] 2015-05-24 14:17:48,320 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-149846206-67.195.81.148-1432477065862 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 13ms
     [exec] 2015-05-24 14:17:48,320 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-149846206-67.195.81.148-1432477065862: 14ms
     [exec] 2015-05-24 14:17:48,321 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-24 14:17:48,321 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-149846206-67.195.81.148-1432477065862/current/replicas> doesn't exist 
     [exec] 2015-05-24 14:17:48,321 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-24 14:17:48,322 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-24 14:17:48,322 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-149846206-67.195.81.148-1432477065862/current/replicas> doesn't exist 
     [exec] 2015-05-24 14:17:48,322 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-149846206-67.195.81.148-1432477065862 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-24 14:17:48,322 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-24 14:17:48,324 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 beginning handshake with NN
     [exec] 2015-05-24 14:17:48,335 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(884)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:58314, datanodeUuid=7f2bd5fe-b13a-4066-8bc6-5a9476738d19, infoPort=60590, infoSecurePort=0, ipcPort=48866, storageInfo=lv=-56;cid=testClusterID;nsid=407141728;c=0) storage 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,335 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-24 14:17:48,336 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,339 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 successfully registered with NN
     [exec] 2015-05-24 14:17:48,339 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:48928 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-24 14:17:48,348 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2332)) - No heartbeat from DataNode: 127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,348 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-24 14:17:48,349 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-24 14:17:48,349 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5 for DN 127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,350 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-392e90f7-b807-49fa-a540-7f799afac17f for DN 127.0.0.1:58314
     [exec] 2015-05-24 14:17:48,358 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-24 14:17:48,358 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928
     [exec] 2015-05-24 14:17:48,370 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-392e90f7-b807-49fa-a540-7f799afac17f from datanode 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,370 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-392e90f7-b807-49fa-a540-7f799afac17f node DatanodeRegistration(127.0.0.1:58314, datanodeUuid=7f2bd5fe-b13a-4066-8bc6-5a9476738d19, infoPort=60590, infoSecurePort=0, ipcPort=48866, storageInfo=lv=-56;cid=testClusterID;nsid=407141728;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-24 14:17:48,371 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1823)) - Processing first storage report for DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5 from datanode 7f2bd5fe-b13a-4066-8bc6-5a9476738d19
     [exec] 2015-05-24 14:17:48,371 INFO  BlockStateChange (BlockManager.java:processReport(1872)) - BLOCK* processReport: from storage DS-0d10a1d2-f63f-456c-8102-e177a08ab8c5 node DatanodeRegistration(127.0.0.1:58314, datanodeUuid=7f2bd5fe-b13a-4066-8bc6-5a9476738d19, infoPort=60590, infoSecurePort=0, ipcPort=48866, storageInfo=lv=-56;cid=testClusterID;nsid=407141728;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
     [exec] 2015-05-24 14:17:48,386 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x113e6b703b11537a,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 25 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-24 14:17:48,386 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,454 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-24 14:17:48,461 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-24 14:17:48,462 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-24 14:17:48,462 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-24 14:17:48,462 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(379)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-24 14:17:48,464 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-24 14:17:48,574 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 48866
     [exec] 2015-05-24 14:17:48,575 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 48866
     [exec] 2015-05-24 14:17:48,575 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928 interrupted
     [exec] 2015-05-24 14:17:48,575 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-24 14:17:48,575 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19) service to localhost/127.0.0.1:48928
     [exec] 2015-05-24 14:17:48,679 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-149846206-67.195.81.148-1432477065862 (Datanode Uuid 7f2bd5fe-b13a-4066-8bc6-5a9476738d19)
     [exec] 2015-05-24 14:17:48,679 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-149846206-67.195.81.148-1432477065862
     [exec] 2015-05-24 14:17:48,681 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-24 14:17:48,682 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-24 14:17:48,682 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-24 14:17:48,682 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-24 14:17:48,687 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-24 14:17:48,687 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-24 14:17:48,687 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-24 14:17:48,687 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4205)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-24 14:17:48,688 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 1 
     [exec] 2015-05-24 14:17:48,688 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4125)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-24 14:17:48,689 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-24 14:17:48,690 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-24 14:17:48,692 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 48928
     [exec] 2015-05-24 14:17:48,692 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 48928
     [exec] 2015-05-24 14:17:48,692 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-24 14:17:48,692 INFO  blockmanagement.BlockManager (BlockManager.java:run(3693)) - Stopping ReplicationMonitor.
     [exec] 2015-05-24 14:17:48,719 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1219)) - Stopping services started for active state
     [exec] 2015-05-24 14:17:48,719 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1309)) - Stopping services started for standby state
     [exec] 2015-05-24 14:17:48,720 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-24 14:17:48,821 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-24 14:17:48,822 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-24 14:17:48,823 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
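
[Editor's note] The test_native_mini_dfs output above follows the usual MiniDFSCluster lifecycle: format and start a single DataNode, wait for the cluster to become active, then shut everything down. A hedged Java sketch of that lifecycle is shown below; the configuration values and class name are illustrative, and this is the pattern reflected in the log rather than the native test's actual source.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsLifecycleSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)      // the log above starts a single DataNode
            .build();
        try {
          cluster.waitActive();   // corresponds to "Waiting for cluster to become active"
          // ... test body would run against the cluster here ...
        } finally {
          cluster.shutdown();     // corresponds to "Shutting down the Mini HDFS Cluster"
        }
      }
    }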
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.530 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.072 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-24T14:20:12+00:00
[INFO] Final Memory: 54M/685M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362716 bytes
Compression is 0.0%
Took 6.9 sec
Recording test results

Build failed in Jenkins: Hadoop-Hdfs-trunk #2134

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2134/changes>

Changes:

[ozawa] MAPREDUCE-6204. TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS.

[cmccabe] HADOOP-11927.  Fix "undefined reference to dlopen" error when compiling libhadooppipes (Xianyin Xin via Colin P. McCabe)

[xgong] YARN-3701. Isolating the error of generating a single app report when

[jianhe] YARN-3707. RM Web UI queue filter doesn't work. Contributed by Wangda Tan

------------------------------------------
[...truncated 6651 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.189 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.858 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.455 sec - in org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.79 sec - in org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.672 sec - in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestTokenAspect
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.45 sec - in org.apache.hadoop.hdfs.web.TestTokenAspect
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.213 sec - in org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.169 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.702 sec - in org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.095 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.134 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.625 sec - in org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.092 sec - in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 28.621 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.925 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.942 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.515 sec - in org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.882 sec - in org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.2 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.365 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Running org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.399 sec - in org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.027 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.684 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.931 sec - in org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.466 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.887 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.939 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.728 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.489 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.643 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.775 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.917 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.196 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.776 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.602 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.44 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.636 sec - in org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.435 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.396 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.796 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.555 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.323 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.527 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.687 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.504 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.589 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.955 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.603 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.663 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.921 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.452 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.518 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.66 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.164 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.507 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.844 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.359 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.577 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.848 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.975 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.878 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.857 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.177 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.986 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.067 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.836 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.101 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.633 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.466 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.94 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.895 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.392 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.024 sec - in org.apache.hadoop.TestGenericRefresh

Results :

Tests in error: 
  TestAppendSnapshotTruncate.testAST:128 » IllegalState dir has ERROR

Tests run: 3437, Failures: 0, Errors: 1, Skipped: 17
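
[Editor's note] Surefire abbreviates each entry above as "ClassName.method:line » ExceptionShortName message", so "» IllegalState dir has ERROR" indicates an IllegalStateException whose message is "dir has ERROR" thrown at TestAppendSnapshotTruncate.java:128. The snippet below is only a hypothetical illustration of a check that would produce that message; the variable names and values are assumptions, not the test's real code.

    public class IllegalStateSketch {
      public static void main(String[] args) {
        String workerName = "dir";    // assumed worker label
        String workerState = "ERROR"; // assumed recorded state
        if ("ERROR".equals(workerState)) {
          // Throws IllegalStateException: dir has ERROR -- the form surefire
          // shortens to "» IllegalState dir has ERROR" in the summary above.
          throw new IllegalStateException(workerName + " has " + workerState);
        }
      }
    }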

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.291 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:41 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.068 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-23T14:16:49+00:00
[INFO] Final Memory: 60M/719M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362646 bytes
Compression is 0.0%
Took 34 sec
Recording test results
Updating YARN-3701
Updating YARN-3707
Updating MAPREDUCE-6204
Updating HADOOP-11927

Build failed in Jenkins: Hadoop-Hdfs-trunk #2133

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2133/changes>

Changes:

[aajisaka] YARN-3694. Fix dead link for TimelineServer REST API. Contributed by Jagadesh Kiran N.

[devaraj] YARN-3646. Applications are getting stuck some times in case of retry

[wheat9] HDFS-8421. Move startFile() and related functions into FSDirWriteFileOp. Contributed by Haohui Mai.

[xyao] HDFS-8451. DFSClient probe for encryption testing interprets empty URI property for enabled. Contributed by Steve Loughran.

[kasha] YARN-3675. FairScheduler: RM quits when node removal races with continuous-scheduling on the same node. (Anubhav Dhoot via kasha)

[jghoman] HADOOP-12016. Typo in FileSystem::listStatusIterator. Contributed by Arthur Vigil.

[vinodkv] YARN-3684. Changed ContainerExecutor's primary lifecycle methods to use a more extensible mechanism of context objects. Contributed by Sidharta Seethana.

[arp] HDFS-8454. Remove unnecessary throttling in TestDatanodeDeath. (Arpit Agarwal)

[aajisaka] HADOOP-12014. hadoop-config.cmd displays a wrong error message. Contributed by Kengo Seki.

[aajisaka] HADOOP-11955. Fix a typo in the cluster setup doc. Contributed by Yanjun Wang.

[aajisaka] HADOOP-11594. Improve the readability of site index of documentation. Contributed by Masatake Iwasaki.

[vinayakumarb] HDFS-8268. Port conflict log for data node server is not sufficient (Contributed by Mohammad Shahid Khan)

[junping_du] YARN-3594. WintuilsProcessStubExecutor.startStreamReader leaks streams. Contributed by Lars Francke.

[vinayakumarb] HADOOP-11743. maven doesn't clean all the site files (Contributed by ramtin)

------------------------------------------
[...truncated 6640 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.763 sec - in org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.374 sec - in org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.962 sec - in org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.819 sec - in org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.594 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.581 sec - in org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.71 sec - in org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.965 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.439 sec - in org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.248 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.889 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.16 sec - in org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.957 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.752 sec - in org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.229 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.888 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.227 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.254 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.216 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.942 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.608 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.616 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.264 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.695 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.341 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.132 sec - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.932 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.029 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.046 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 179.722 sec - in org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.139 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.44 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.838 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.063 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.138 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.951 sec - in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.448 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.154 sec - in org.apache.hadoop.hdfs.TestFetchImage
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.134 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.14 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.574 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.424 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.495 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.039 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.495 sec - in org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.254 sec - in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.387 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.566 sec - in org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.701 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.609 sec - in org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.871 sec - in org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.828 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.814 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.143 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.406 sec - in org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.253 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.241 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.885 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.82 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.022 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.019 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.577 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.112 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.844 sec - in org.apache.hadoop.security.TestRefreshUserMappings

Results :

Tests in error: 
  TestFileTruncate.testTruncateFailure » IO Failed to replace a bad datanode on ...

Tests run: 3437, Failures: 0, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.837 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-22T14:20:00+00:00
[INFO] Final Memory: 61M/678M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363209 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675
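
A note on the single error in this run: TestFileTruncate.testTruncateFailure failing with an IO error beginning "Failed to replace a bad datanode on ..." typically means the HDFS client's pipeline recovery could not find a replacement datanode, which is easy to hit in a small MiniDFSCluster. As a hedged illustration only (the property names are standard HDFS client keys, but the class and values below are hypothetical and not taken from this build), such tests commonly relax the replace-datanode-on-failure policy in their test Configuration:

    // Hypothetical sketch, not from this build: a test Configuration that
    // relaxes the replace-datanode-on-failure policy, the usual way
    // MiniDFSCluster-based tests with only a few datanodes avoid
    // "Failed to replace a bad datanode" errors during pipeline recovery.
    import org.apache.hadoop.conf.Configuration;

    public class RelaxedPipelineRecoveryConf {
      public static Configuration create() {
        Configuration conf = new Configuration();
        // Standard HDFS client keys; the values chosen here are illustrative.
        conf.setBoolean(
            "dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set(
            "dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        return conf;
      }
    }

Whether that is an appropriate fix here depends on the test's intent; the failure may equally be a transient environment problem on the build slave.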

Build failed in Jenkins: Hadoop-Hdfs-trunk #2132

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2132/changes>

Changes:

[wangda] Move YARN-2918 from 2.8.0 to 2.7.1

[xgong] YARN-3681. yarn cmd says "could not find main class 'queue'" in windows.

[jianhe] YARN-3609. Load node labels from storage inside RM serviceStart. Contributed by Wangda Tan

[jianhe] YARN-3654. ContainerLogsPage web UI should not have meta-refresh. Contributed by Xuan Gong

[wheat9] HADOOP-11772. RPC Invoker relies on static ClientCache which has synchronized(this) blocks. Contributed by Haohui Mai.

[aajisaka] HDFS-4383. Document the lease limits. Contributed by Arshad Mohammad.

[aajisaka] HADOOP-10366. Add whitespaces between classes for values in core-default.xml to fit better in browser. Contributed by kanaka kumar avvaru.

------------------------------------------
[...truncated 6221 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.798 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.576 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.349 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.953 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.643 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.208 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.17 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.58 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.277 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.641 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.885 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.299 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.192 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.286 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.865 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.868 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.527 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.486 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.078 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.196 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.548 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.774 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.19 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.12 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.029 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.778 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.066 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.118 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.786 sec - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.747 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.487 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Running org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.179 sec - in org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.876 sec - in org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.544 sec - in org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures
Running org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.695 sec - in org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.468 sec - in org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.72 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Running org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.198 sec - in org.apache.hadoop.hdfs.server.namenode.TestGetBlockLocations
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.56 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.261 sec - in org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.196 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsck
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.38 sec - in org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.339 sec - in org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Running org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.6 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Running org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.193 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.608 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.288 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.612 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.097 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.511 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.104 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.996 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Running org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.508 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.482 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Running org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.888 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.982 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.688 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.872 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.332 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.407 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.292 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.311 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.47 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.905 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.84 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.292 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.156 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.55 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.808 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.055 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.298 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.979 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.698 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions

Results :

Tests in error: 
  TestDFSZKFailoverController>ClientBaseWithFixes.setUp:409->ClientBaseWithFixes.startServer:445->ClientBaseWithFixes.createNewServerInstance:348 » Bind
  TestDFSZKFailoverController.shutdown:114 NullPointer

Tests run: 2262, Failures: 0, Errors: 2, Skipped: 13

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 49.288 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:19 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:20 h
[INFO] Finished at: 2015-05-21T12:55:57+00:00
[INFO] Final Memory: 66M/964M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5171256906604989555.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire7502973680232758651tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_905990491850262438729tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363065 bytes
Compression is 0.0%
Took 8 sec
Recording test results
Updating HDFS-4383
Updating HADOOP-10366
Updating HADOOP-11772
Updating YARN-2918
Updating YARN-3654
Updating YARN-3609
Updating YARN-3681

Hadoop-Hdfs-trunk - Build # 2132 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2132/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6414 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 49.288 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:19 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:20 h
[INFO] Finished at: 2015-05-21T12:55:57+00:00
[INFO] Final Memory: 66M/964M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5171256906604989555.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire7502973680232758651tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_905990491850262438729tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363065 bytes
Compression is 0.0%
Took 8 sec
Recording test results
Updating HDFS-4383
Updating HADOOP-10366
Updating HADOOP-11772
Updating YARN-2918
Updating YARN-3654
Updating YARN-3609
Updating YARN-3681
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController.testFailoverAndBackOnNNShutdown

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
	at org.apache.zookeeper.server.ServerCnxnFactory.createFactory(ServerCnxnFactory.java:126)
	at org.apache.zookeeper.server.ServerCnxnFactory.createFactory(ServerCnxnFactory.java:119)
	at org.apache.hadoop.ha.ClientBaseWithFixes.createNewServerInstance(ClientBaseWithFixes.java:348)
	at org.apache.hadoop.ha.ClientBaseWithFixes.startServer(ClientBaseWithFixes.java:445)
	at org.apache.hadoop.ha.ClientBaseWithFixes.setUp(ClientBaseWithFixes.java:409)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


REGRESSION:  org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController.testFailoverAndBackOnNNShutdown

Error Message:
null

Stack Trace:
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController.shutdown(TestDFSZKFailoverController.java:114)
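
The two regressions above are chained: the embedded ZooKeeper test server started by ClientBaseWithFixes fails to bind its port (BindException: Address already in use, most likely another process on the shared Jenkins slave holding it), so setUp() never completes, and the subsequent shutdown() at TestDFSZKFailoverController.java:114 dereferences a field that was never initialized, producing the secondary NullPointerException. A minimal, hypothetical sketch of the defensive-teardown pattern that keeps such secondary NPEs out of the report (names are illustrative, not the actual test code):

    import org.junit.After;

    public class DefensiveTeardownSketch {
      // Stand-in for whatever resource setUp() is supposed to start.
      private AutoCloseable cluster;

      @After
      public void shutdown() throws Exception {
        // setUp() may have failed before assigning the field (e.g. on a
        // BindException), so guard the teardown instead of assuming it ran.
        if (cluster != null) {
          cluster.close();
          cluster = null;
        }
      }
    }

Binding the test ZooKeeper server to an ephemeral port (port 0) rather than a fixed one is the usual way to avoid the BindException itself on busy build machines.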



Build failed in Jenkins: Hadoop-Hdfs-trunk #2131

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2131/changes>

Changes:

[kihwal] HDFS-8131. Implement a space balanced block placement policy. Contributed by Liu Shaohui.

[xgong] YARN-3601. Fix UT TestRMFailover.testRMWebAppRedirect. Contributed by Weiwei Yang

[raviprak] YARN-3302. TestDockerContainerExecutor should run automatically if it can detect docker in the usual place (Ravindra Kumar Naik via raviprak)

[cmccabe] HADOOP-11970. Replace uses of ThreadLocal<Random> with JDK7 ThreadLocalRandom (Sean Busbey via Colin P. McCabe)

[kihwal] HDFS-8404. Pending block replication can get stuck using older genstamp. Contributed by Nathan Roberts.

[junping_du] Moving MAPREDUCE-6361 to 2.7.1 CHANGES.txt

[Arun Suresh] HADOOP-11973. Ensure ZkDelegationTokenSecretManager namespace znodes get created with ACLs. (Gregory Chanan via asuresh)

[cnauroth] HADOOP-11963. Metrics documentation for FSNamesystem misspells PendingDataNodeMessageCount. Contributed by Anu Engineer.

[jianhe] YARN-2821. Fixed a problem that DistributedShell AM may hang if restarted. Contributed by Varun Vasudev

[aw] HADOOP-12000. cannot use --java-home in test-patch (aw)

[wangda] YARN-3565. NodeHeartbeatRequest/RegisterNodeManagerRequest should use NodeLabel object instead of String. (Naganarasimha G R via wangda)

[wangda] YARN-3583. Support of NodeLabel object instead of plain String in YarnClient side. (Sunil G via wangda)

[ozawa] YARN-3677. Fix findbugs warnings in yarn-server-resourcemanager. Contributed by Vinod Kumar Vavilapalli.

[wheat9] HADOOP-11995. Make jetty version configurable from the maven command line. Contributed by Sriharsha Devineni.

[aajisaka] HADOOP-11698. Remove DistCpV1 and Logalyzer. Contributed by Brahma Reddy Battula.

------------------------------------------
[...truncated 6581 lines...]
Running org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.058 sec - in org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.458 sec - in org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Running org.apache.hadoop.hdfs.server.datanode.TestHdfsServerConstants
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in org.apache.hadoop.hdfs.server.datanode.TestHdfsServerConstants
Running org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.182 sec - in org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Running org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.49 sec - in org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.053 sec - in org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.265 sec - in org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Running org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.921 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.892 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID
Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.493 sec - in org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Running org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.427 sec - in org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.371 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestAvailableSpaceVolumeChoosingPolicy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.773 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.TestAvailableSpaceVolumeChoosingPolicy
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.487 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.214 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 131.546 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.279 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.108 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.667 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.005 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.218 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.796 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.596 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.569 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.957 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.054 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.519 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.976 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.576 sec - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.648 sec - in org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.679 sec - in org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.14 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeProtocolRetryPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.422 sec - in org.apache.hadoop.hdfs.server.datanode.TestDatanodeProtocolRetryPolicy
Running org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.46 sec - in org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
Running org.apache.hadoop.hdfs.server.datanode.TestIncrementalBrVariations
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.556 sec - in org.apache.hadoop.hdfs.server.datanode.TestIncrementalBrVariations
Running org.apache.hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.381 sec - in org.apache.hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.401 sec - in org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.939 sec - in org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.763 sec - in org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.246 sec - in org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.08 sec - in org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.916 sec - in org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.649 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.745 sec - in org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.89 sec - in org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.996 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.431 sec - in org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.299 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.478 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.035 sec - in org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.715 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.657 sec - in org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.084 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.257 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.349 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.321 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.265 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.225 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.089 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.021 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.586 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.134 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.182 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.792 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.293 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.056 sec - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.837 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.037 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.044 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2

Results :

Tests in error: 
  TestHDFSConcat.startUpCluster:74 » IO Timed out waiting for Mini HDFS Cluster ...

Tests run: 3225, Failures: 0, Errors: 1, Skipped: 16

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.867 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:31 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.081 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:31 h
[INFO] Finished at: 2015-05-20T14:31:21+00:00
[INFO] Final Memory: 65M/730M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in starting fork, check output in log -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363185 bytes
Compression is 0.0%
Took 5.9 sec
Recording test results
Updating HADOOP-11698
Updating HADOOP-11973
Updating YARN-3583
Updating YARN-3601
Updating HADOOP-11995
Updating HADOOP-12000
Updating HADOOP-11970
Updating YARN-3565
Updating YARN-2821
Updating MAPREDUCE-6361
Updating HDFS-8404
Updating YARN-3302
Updating HDFS-8131
Updating YARN-3677
Updating HADOOP-11963

Hadoop-Hdfs-trunk - Build # 2131 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2131/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6774 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.867 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:31 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.081 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:31 h
[INFO] Finished at: 2015-05-20T14:31:21+00:00
[INFO] Final Memory: 65M/730M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in starting fork, check output in log -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363185 bytes
Compression is 0.0%
Took 5.9 sec
Recording test results
Updating HADOOP-11698
Updating HADOOP-11973
Updating YARN-3583
Updating YARN-3601
Updating HADOOP-11995
Updating HADOOP-12000
Updating HADOOP-11970
Updating YARN-3565
Updating YARN-2821
Updating MAPREDUCE-6361
Updating HDFS-8404
Updating YARN-3302
Updating HDFS-8131
Updating YARN-3677
Updating HADOOP-11963
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat.testConcat

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1206)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:471)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:430)
	at org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat.startUpCluster(TestHDFSConcat.java:74)
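
For readers not familiar with this call chain: it is the standard MiniDFSCluster setup used throughout the HDFS unit tests. Below is a minimal, illustrative sketch of that pattern only (it is not the actual TestHDFSConcat source; the class name MiniClusterStartupSketch, the main-method wrapper, and the single-datanode count are assumptions made for the sketch).

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterStartupSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = null;
        try {
          // Builder.build() starts an in-process NameNode plus DataNodes and
          // then calls waitClusterUp(); that wait is the step that failed in
          // this build with "Timed out waiting for Mini HDFS Cluster to start".
          cluster = new MiniDFSCluster.Builder(conf)
              .numDataNodes(1)   // datanode count here is an assumption
              .build();
          cluster.waitActive();  // wait until NN and DNs report in
          DistributedFileSystem dfs = cluster.getFileSystem();
          System.out.println("Mini HDFS cluster is up: " + dfs.getUri());
        } finally {
          if (cluster != null) {
            cluster.shutdown();  // always tear the mini cluster down
          }
        }
      }
    }

On a heavily loaded build executor this startup can plausibly exceed the builder's wait and surface as exactly the IOException reported above; the surrounding console logs from the other builds in this thread show the same cluster coming up and shutting down normally when the machine is less busy.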



Build failed in Jenkins: Hadoop-Hdfs-trunk #2130

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2130/changes>

Changes:

[umamahesh] HDFS-8412. Fix the test failures in HTTPFS: In some tests setReplication called after fs close. Contributed by Uma Maheswara Rao G.

[aw] HADOOP-11884. test-patch.sh should pull the real findbugs version  (Kengo Seki via aw)

[aw] HADOOP-11944. add option to test-patch to avoid relocating patch process directory (Sean Busbey via aw)

[aw] HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)

[arp] HDFS-8345. Storage policy APIs must be exposed via the FileSystem interface. (Arpit Agarwal)

[szetszwo] HDFS-8405. Fix a typo in NamenodeFsck.  Contributed by Takanobu Asanuma

[raviprak] HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)

[xgong] YARN-3541. Add version info on timeline service / generic history web UI and REST API. Contributed by Zhijie Shen

[jing9] HADOOP-1540. Support file exclusion list in distcp. Contributed by Rich Haase.

[vinayakumarb] HDFS-6348. SecondaryNameNode not terminating properly on runtime exceptions (Contributed by Rakesh R)

[aajisaka] HADOOP-10971. Add -C flag to make `hadoop fs -ls` print filenames only. Contributed by Kengo Seki.

[aajisaka] Move HADOOP-8934 in CHANGES.txt from 3.0.0 to 2.8.0.

[vinayakumarb] HADOOP-11103. Clean up RemoteException (Contributed by Sean Busbey)

[aajisaka] Move HADOOP-11581 in CHANGES.txt from 3.0.0 to 2.8.0.

------------------------------------------
[...truncated 7880 lines...]
     [exec] 2015-05-19 14:19:14,944 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-19 14:19:14,946 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-19 14:19:14,956 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:55935 starting to offer service
     [exec] 2015-05-19 14:19:14,962 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-19 14:19:14,962 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 45467: starting
     [exec] 2015-05-19 14:19:15,393 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 5448@asf905.gq1.ygridcore.net
     [exec] 2015-05-19 14:19:15,393 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,394 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-19 14:19:15,435 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,436 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1594271650-67.195.81.149-1432045153057>
     [exec] 2015-05-19 14:19:15,436 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1594271650-67.195.81.149-1432045153057> is not formatted for BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,436 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-19 14:19:15,436 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1594271650-67.195.81.149-1432045153057 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1594271650-67.195.81.149-1432045153057/current>
     [exec] 2015-05-19 14:19:15,439 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 5448@asf905.gq1.ygridcore.net
     [exec] 2015-05-19 14:19:15,439 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,439 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-19 14:19:15,474 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,474 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1594271650-67.195.81.149-1432045153057>
     [exec] 2015-05-19 14:19:15,474 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1594271650-67.195.81.149-1432045153057> is not formatted for BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,474 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-19 14:19:15,474 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1594271650-67.195.81.149-1432045153057 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1594271650-67.195.81.149-1432045153057/current>
     [exec] 2015-05-19 14:19:15,476 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=1290721825;bpid=BP-1594271650-67.195.81.149-1432045153057;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=1290721825;c=0;bpid=BP-1594271650-67.195.81.149-1432045153057;dnuuid=null
     [exec] 2015-05-19 14:19:15,477 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 15a01f84-c6d9-4b5d-9c12-ee1585476e28
     [exec] 2015-05-19 14:19:15,499 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-cb09550a-a3fc-4135-a844-b6c961ffeccd
     [exec] 2015-05-19 14:19:15,500 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-19 14:19:15,500 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-d9942059-720b-44f0-8d56-29eca92aa2d4
     [exec] 2015-05-19 14:19:15,500 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-19 14:19:15,503 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-19 14:19:15,510 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(332)) - Periodic Directory Tree Verification scan starting at 1432050790510 with interval 21600000
     [exec] 2015-05-19 14:19:15,510 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,511 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1594271650-67.195.81.149-1432045153057 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-19 14:19:15,513 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1594271650-67.195.81.149-1432045153057 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-19 14:19:15,528 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1594271650-67.195.81.149-1432045153057 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 17ms
     [exec] 2015-05-19 14:19:15,528 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1594271650-67.195.81.149-1432045153057 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 15ms
     [exec] 2015-05-19 14:19:15,528 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-1594271650-67.195.81.149-1432045153057: 17ms
     [exec] 2015-05-19 14:19:15,529 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1594271650-67.195.81.149-1432045153057 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-19 14:19:15,529 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1594271650-67.195.81.149-1432045153057 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-19 14:19:15,529 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1594271650-67.195.81.149-1432045153057/current/replicas> doesn't exist 
     [exec] 2015-05-19 14:19:15,530 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1594271650-67.195.81.149-1432045153057/current/replicas> doesn't exist 
     [exec] 2015-05-19 14:19:15,530 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1594271650-67.195.81.149-1432045153057 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 1ms
     [exec] 2015-05-19 14:19:15,530 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1594271650-67.195.81.149-1432045153057 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 1ms
     [exec] 2015-05-19 14:19:15,531 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 3ms
     [exec] 2015-05-19 14:19:15,533 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-1594271650-67.195.81.149-1432045153057 (Datanode Uuid 15a01f84-c6d9-4b5d-9c12-ee1585476e28) service to localhost/127.0.0.1:55935 beginning handshake with NN
     [exec] 2015-05-19 14:19:15,535 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-19 14:19:15,535 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-19 14:19:15,544 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(883)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:36339, datanodeUuid=15a01f84-c6d9-4b5d-9c12-ee1585476e28, infoPort=41796, infoSecurePort=0, ipcPort=45467, storageInfo=lv=-56;cid=testClusterID;nsid=1290721825;c=0) storage 15a01f84-c6d9-4b5d-9c12-ee1585476e28
     [exec] 2015-05-19 14:19:15,544 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-19 14:19:15,545 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:36339
     [exec] 2015-05-19 14:19:15,550 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-1594271650-67.195.81.149-1432045153057 (Datanode Uuid 15a01f84-c6d9-4b5d-9c12-ee1585476e28) service to localhost/127.0.0.1:55935 successfully registered with NN
     [exec] 2015-05-19 14:19:15,551 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:55935 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-19 14:19:15,563 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-19 14:19:15,563 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-cb09550a-a3fc-4135-a844-b6c961ffeccd for DN 127.0.0.1:36339
     [exec] 2015-05-19 14:19:15,565 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-d9942059-720b-44f0-8d56-29eca92aa2d4 for DN 127.0.0.1:36339
     [exec] 2015-05-19 14:19:15,575 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-1594271650-67.195.81.149-1432045153057 (Datanode Uuid 15a01f84-c6d9-4b5d-9c12-ee1585476e28) service to localhost/127.0.0.1:55935 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-19 14:19:15,575 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-1594271650-67.195.81.149-1432045153057 (Datanode Uuid 15a01f84-c6d9-4b5d-9c12-ee1585476e28) service to localhost/127.0.0.1:55935
     [exec] 2015-05-19 14:19:15,589 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-d9942059-720b-44f0-8d56-29eca92aa2d4 from datanode 15a01f84-c6d9-4b5d-9c12-ee1585476e28
     [exec] 2015-05-19 14:19:15,589 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-d9942059-720b-44f0-8d56-29eca92aa2d4 node DatanodeRegistration(127.0.0.1:36339, datanodeUuid=15a01f84-c6d9-4b5d-9c12-ee1585476e28, infoPort=41796, infoSecurePort=0, ipcPort=45467, storageInfo=lv=-56;cid=testClusterID;nsid=1290721825;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-19 14:19:15,590 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-cb09550a-a3fc-4135-a844-b6c961ffeccd from datanode 15a01f84-c6d9-4b5d-9c12-ee1585476e28
     [exec] 2015-05-19 14:19:15,590 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-cb09550a-a3fc-4135-a844-b6c961ffeccd node DatanodeRegistration(127.0.0.1:36339, datanodeUuid=15a01f84-c6d9-4b5d-9c12-ee1585476e28, infoPort=41796, infoSecurePort=0, ipcPort=45467, storageInfo=lv=-56;cid=testClusterID;nsid=1290721825;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-19 14:19:15,606 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xca03281a3c6da9d6,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 27 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-19 14:19:15,606 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,645 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-19 14:19:15,654 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-19 14:19:15,654 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-19 14:19:15,654 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(378)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-19 14:19:15,655 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-19 14:19:15,656 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-19 14:19:15,768 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 45467
     [exec] 2015-05-19 14:19:15,769 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 45467
     [exec] 2015-05-19 14:19:15,769 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-1594271650-67.195.81.149-1432045153057 (Datanode Uuid 15a01f84-c6d9-4b5d-9c12-ee1585476e28) service to localhost/127.0.0.1:55935 interrupted
     [exec] 2015-05-19 14:19:15,769 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-19 14:19:15,769 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-1594271650-67.195.81.149-1432045153057 (Datanode Uuid 15a01f84-c6d9-4b5d-9c12-ee1585476e28) service to localhost/127.0.0.1:55935
     [exec] 2015-05-19 14:19:15,870 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1594271650-67.195.81.149-1432045153057 (Datanode Uuid 15a01f84-c6d9-4b5d-9c12-ee1585476e28)
     [exec] 2015-05-19 14:19:15,871 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-1594271650-67.195.81.149-1432045153057
     [exec] 2015-05-19 14:19:15,872 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-19 14:19:15,873 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-19 14:19:15,873 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-19 14:19:15,873 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-19 14:19:15,878 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-19 14:19:15,879 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-19 14:19:15,879 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-19 14:19:15,879 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-19 14:19:15,880 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 1 
     [exec] 2015-05-19 14:19:15,880 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-19 14:19:15,881 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-19 14:19:15,883 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-19 14:19:15,885 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 55935
     [exec] 2015-05-19 14:19:15,886 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 55935
     [exec] 2015-05-19 14:19:15,886 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-19 14:19:15,886 INFO  blockmanagement.BlockManager (BlockManager.java:run(3686)) - Stopping ReplicationMonitor.
     [exec] 2015-05-19 14:19:15,914 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-19 14:19:15,915 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for standby state
     [exec] 2015-05-19 14:19:15,916 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-19 14:19:16,016 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-19 14:19:16,018 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-19 14:19:16,018 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.824 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:46 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:47 h
[INFO] Finished at: 2015-05-19T14:21:36+00:00
[INFO] Final Memory: 54M/696M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362791 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HADOOP-11581
Updating HADOOP-11949
Updating HADOOP-11944
Updating YARN-3541
Updating HADOOP-1540
Updating HADOOP-8934
Updating HADOOP-10971
Updating HADOOP-11103
Updating HADOOP-11884
Updating HDFS-8345
Updating HDFS-8405
Updating HDFS-8412
Updating HDFS-6348
Updating HDFS-4185

Build failed in Jenkins: Hadoop-Hdfs-trunk #2129

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2129/changes>

Changes:

[aajisaka] HADOOP-11939. Deprecate DistCpV1 and Logalyzer. Contributed by Brahma Reddy Battula.

[aajisaka] HADOOP-10582. Fix the test case for copying to non-existent dir in TestFsShellCopy. Contributed by Kousuke Saruta.

[umamahesh] Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk

------------------------------------------
[...truncated 7870 lines...]
     [exec] 2015-05-18 14:18:42,206 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-18 14:18:42,206 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-18 14:18:42,209 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 50757
     [exec] 2015-05-18 14:18:42,209 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-18 14:18:42,262 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:50757
     [exec] 2015-05-18 14:18:42,389 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(150)) - Listening HTTP traffic on /127.0.0.1:57157
     [exec] 2015-05-18 14:18:42,390 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-18 14:18:42,390 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-18 14:18:42,404 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-18 14:18:42,405 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 40656
     [exec] 2015-05-18 14:18:42,411 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:40656
     [exec] 2015-05-18 14:18:42,423 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-18 14:18:42,425 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-18 14:18:42,435 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:44732 starting to offer service
     [exec] 2015-05-18 14:18:42,441 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-18 14:18:42,441 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 40656: starting
     [exec] 2015-05-18 14:18:42,876 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 5140@asf905.gq1.ygridcore.net
     [exec] 2015-05-18 14:18:42,876 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:42,876 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-18 14:18:42,918 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:42,918 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-232390966-67.195.81.149-1431958720546>
     [exec] 2015-05-18 14:18:42,918 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-232390966-67.195.81.149-1431958720546> is not formatted for BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:42,918 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-18 14:18:42,919 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-232390966-67.195.81.149-1431958720546 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-232390966-67.195.81.149-1431958720546/current>
     [exec] 2015-05-18 14:18:42,921 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 5140@asf905.gq1.ygridcore.net
     [exec] 2015-05-18 14:18:42,921 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:42,921 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-18 14:18:42,955 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:42,955 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-232390966-67.195.81.149-1431958720546>
     [exec] 2015-05-18 14:18:42,955 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-232390966-67.195.81.149-1431958720546> is not formatted for BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:42,955 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-18 14:18:42,955 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-232390966-67.195.81.149-1431958720546 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-232390966-67.195.81.149-1431958720546/current>
     [exec] 2015-05-18 14:18:42,956 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=532102845;bpid=BP-232390966-67.195.81.149-1431958720546;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=532102845;c=0;bpid=BP-232390966-67.195.81.149-1431958720546;dnuuid=null
     [exec] 2015-05-18 14:18:42,958 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID 4f5a7d77-ff8d-40f9-9943-567bd5612e6f
     [exec] 2015-05-18 14:18:42,983 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-031ed32c-d9d5-42f5-872a-4b0458e65b75
     [exec] 2015-05-18 14:18:42,984 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-18 14:18:42,984 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-fc3f5733-4fc0-4485-bdab-a995a3e90c58
     [exec] 2015-05-18 14:18:42,984 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-18 14:18:42,988 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-18 14:18:42,998 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(332)) - Periodic Directory Tree Verification scan starting at 1431964584998 with interval 21600000
     [exec] 2015-05-18 14:18:42,999 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:42,999 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-232390966-67.195.81.149-1431958720546 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-18 14:18:43,001 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-232390966-67.195.81.149-1431958720546 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-18 14:18:43,010 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-232390966-67.195.81.149-1431958720546 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 10ms
     [exec] 2015-05-18 14:18:43,010 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-232390966-67.195.81.149-1431958720546 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 9ms
     [exec] 2015-05-18 14:18:43,010 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-232390966-67.195.81.149-1431958720546: 12ms
     [exec] 2015-05-18 14:18:43,011 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-232390966-67.195.81.149-1431958720546 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-18 14:18:43,011 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-232390966-67.195.81.149-1431958720546/current/replicas> doesn't exist 
     [exec] 2015-05-18 14:18:43,012 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-232390966-67.195.81.149-1431958720546 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-18 14:18:43,012 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-232390966-67.195.81.149-1431958720546 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 1ms
     [exec] 2015-05-18 14:18:43,012 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-232390966-67.195.81.149-1431958720546/current/replicas> doesn't exist 
     [exec] 2015-05-18 14:18:43,012 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-232390966-67.195.81.149-1431958720546 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-18 14:18:43,012 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 1ms
     [exec] 2015-05-18 14:18:43,014 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-232390966-67.195.81.149-1431958720546 (Datanode Uuid 4f5a7d77-ff8d-40f9-9943-567bd5612e6f) service to localhost/127.0.0.1:44732 beginning handshake with NN
     [exec] 2015-05-18 14:18:43,025 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-18 14:18:43,025 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-18 14:18:43,026 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(883)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:58680, datanodeUuid=4f5a7d77-ff8d-40f9-9943-567bd5612e6f, infoPort=57157, infoSecurePort=0, ipcPort=40656, storageInfo=lv=-56;cid=testClusterID;nsid=532102845;c=0) storage 4f5a7d77-ff8d-40f9-9943-567bd5612e6f
     [exec] 2015-05-18 14:18:43,026 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-18 14:18:43,027 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:58680
     [exec] 2015-05-18 14:18:43,031 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-232390966-67.195.81.149-1431958720546 (Datanode Uuid 4f5a7d77-ff8d-40f9-9943-567bd5612e6f) service to localhost/127.0.0.1:44732 successfully registered with NN
     [exec] 2015-05-18 14:18:43,031 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:44732 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-18 14:18:43,042 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-18 14:18:43,042 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-031ed32c-d9d5-42f5-872a-4b0458e65b75 for DN 127.0.0.1:58680
     [exec] 2015-05-18 14:18:43,043 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-fc3f5733-4fc0-4485-bdab-a995a3e90c58 for DN 127.0.0.1:58680
     [exec] 2015-05-18 14:18:43,054 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-232390966-67.195.81.149-1431958720546 (Datanode Uuid 4f5a7d77-ff8d-40f9-9943-567bd5612e6f) service to localhost/127.0.0.1:44732 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-18 14:18:43,054 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-232390966-67.195.81.149-1431958720546 (Datanode Uuid 4f5a7d77-ff8d-40f9-9943-567bd5612e6f) service to localhost/127.0.0.1:44732
     [exec] 2015-05-18 14:18:43,067 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-031ed32c-d9d5-42f5-872a-4b0458e65b75 from datanode 4f5a7d77-ff8d-40f9-9943-567bd5612e6f
     [exec] 2015-05-18 14:18:43,068 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-031ed32c-d9d5-42f5-872a-4b0458e65b75 node DatanodeRegistration(127.0.0.1:58680, datanodeUuid=4f5a7d77-ff8d-40f9-9943-567bd5612e6f, infoPort=57157, infoSecurePort=0, ipcPort=40656, storageInfo=lv=-56;cid=testClusterID;nsid=532102845;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
     [exec] 2015-05-18 14:18:43,069 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-fc3f5733-4fc0-4485-bdab-a995a3e90c58 from datanode 4f5a7d77-ff8d-40f9-9943-567bd5612e6f
     [exec] 2015-05-18 14:18:43,069 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-fc3f5733-4fc0-4485-bdab-a995a3e90c58 node DatanodeRegistration(127.0.0.1:58680, datanodeUuid=4f5a7d77-ff8d-40f9-9943-567bd5612e6f, infoPort=57157, infoSecurePort=0, ipcPort=40656, storageInfo=lv=-56;cid=testClusterID;nsid=532102845;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-18 14:18:43,085 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xffc6c56a7e038e9d,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 28 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-18 14:18:43,085 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:43,134 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-18 14:18:43,143 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-18 14:18:43,143 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-18 14:18:43,143 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(378)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-18 14:18:43,144 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-18 14:18:43,145 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-18 14:18:43,257 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 40656
     [exec] 2015-05-18 14:18:43,258 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 40656
     [exec] 2015-05-18 14:18:43,258 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-232390966-67.195.81.149-1431958720546 (Datanode Uuid 4f5a7d77-ff8d-40f9-9943-567bd5612e6f) service to localhost/127.0.0.1:44732 interrupted
     [exec] 2015-05-18 14:18:43,258 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-18 14:18:43,258 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-232390966-67.195.81.149-1431958720546 (Datanode Uuid 4f5a7d77-ff8d-40f9-9943-567bd5612e6f) service to localhost/127.0.0.1:44732
     [exec] 2015-05-18 14:18:43,360 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-232390966-67.195.81.149-1431958720546 (Datanode Uuid 4f5a7d77-ff8d-40f9-9943-567bd5612e6f)
     [exec] 2015-05-18 14:18:43,360 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-232390966-67.195.81.149-1431958720546
     [exec] 2015-05-18 14:18:43,361 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-18 14:18:43,361 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-18 14:18:43,362 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-18 14:18:43,362 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-18 14:18:43,367 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-18 14:18:43,367 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-18 14:18:43,368 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-18 14:18:43,368 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-18 14:18:43,369 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 2 
     [exec] 2015-05-18 14:18:43,369 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-18 14:18:43,370 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-18 14:18:43,371 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-18 14:18:43,372 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 44732
     [exec] 2015-05-18 14:18:43,373 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 44732
     [exec] 2015-05-18 14:18:43,373 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-18 14:18:43,373 INFO  blockmanagement.BlockManager (BlockManager.java:run(3686)) - Stopping ReplicationMonitor.
     [exec] 2015-05-18 14:18:43,405 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-18 14:18:43,406 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for standby state
     [exec] 2015-05-18 14:18:43,407 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-18 14:18:43,508 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-18 14:18:43,509 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-18 14:18:43,509 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
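
The console output above traces the usual MiniDFSCluster lifecycle: format the storage directories, register the DataNode with the NameNode, send the first block report, wait for the cluster to become active, then shut everything down. As an illustrative sketch only (the test_native_mini_dfs step appears to exercise the cluster through the native libhdfs bindings rather than directly from Java), the equivalent pattern with the stock hadoop-hdfs test API looks roughly like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsLifecycleSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // Start a single-DataNode cluster; this is what produces the NameNode/DataNode
        // startup, registration and block-report messages seen in the log.
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)
            .build();
        try {
          // Blocks until the DataNode has registered and heartbeated
          // ("Waiting for cluster to become active" ... "Cluster is active").
          cluster.waitActive();
        } finally {
          // Triggers the orderly shutdown sequence
          // ("Shutting down the Mini HDFS Cluster" ... "Shutdown complete.").
          cluster.shutdown();
        }
      }
    }
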
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.737 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-18T14:21:06+00:00
[INFO] Final Memory: 54M/698M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362686 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HADOOP-11939
Updating HADOOP-10582
Updating HDFS-8332

Build failed in Jenkins: Hadoop-Hdfs-trunk #2128

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2128/changes>

Changes:

[arp] HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal)

[aajisaka] HADOOP-11988. Fix typo in the document for hadoop fs -find. Contributed by Kengo Seki.

------------------------------------------
[...truncated 7869 lines...]
     [exec] 2015-05-17 14:17:24,109 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(678)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
     [exec] 2015-05-17 14:17:24,110 INFO  http.HttpServer2 (HttpServer2.java:addFilter(653)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
     [exec] 2015-05-17 14:17:24,111 INFO  http.HttpServer2 (HttpServer2.java:addFilter(661)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
     [exec] 2015-05-17 14:17:24,113 INFO  http.HttpServer2 (HttpServer2.java:openListeners(883)) - Jetty bound to port 43065
     [exec] 2015-05-17 14:17:24,113 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2015-05-17 14:17:24,168 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:43065
     [exec] 2015-05-17 14:17:24,294 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(150)) - Listening HTTP traffic on /127.0.0.1:36594
     [exec] 2015-05-17 14:17:24,296 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-17 14:17:24,296 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-17 14:17:24,309 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-17 14:17:24,310 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 41644
     [exec] 2015-05-17 14:17:24,317 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:41644
     [exec] 2015-05-17 14:17:24,329 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-17 14:17:24,331 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-17 14:17:24,341 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:52051 starting to offer service
     [exec] 2015-05-17 14:17:24,347 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-17 14:17:24,348 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 41644: starting
     [exec] 2015-05-17 14:17:24,574 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 32184@asf909.gq1.ygridcore.net
     [exec] 2015-05-17 14:17:24,574 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,574 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-17 14:17:24,614 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,614 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456>
     [exec] 2015-05-17 14:17:24,615 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456> is not formatted for BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,615 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-17 14:17:24,615 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1054975985-67.195.81.153-1431872242456 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456/current>
     [exec] 2015-05-17 14:17:24,617 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 32184@asf909.gq1.ygridcore.net
     [exec] 2015-05-17 14:17:24,617 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,617 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-17 14:17:24,653 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,653 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456>
     [exec] 2015-05-17 14:17:24,653 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456> is not formatted for BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,653 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-17 14:17:24,654 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-1054975985-67.195.81.153-1431872242456 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456/current>
     [exec] 2015-05-17 14:17:24,655 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=629540541;bpid=BP-1054975985-67.195.81.153-1431872242456;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=629540541;c=0;bpid=BP-1054975985-67.195.81.153-1431872242456;dnuuid=null
     [exec] 2015-05-17 14:17:24,657 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID d2fa2be2-3d17-418d-b02a-270032114a57
     [exec] 2015-05-17 14:17:24,678 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2
     [exec] 2015-05-17 14:17:24,678 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-17 14:17:24,679 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-5881e9a8-64f4-434a-b980-ed2a486ecf63
     [exec] 2015-05-17 14:17:24,679 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(403)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-17 14:17:24,682 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
     [exec] 2015-05-17 14:17:24,688 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(332)) - Periodic Directory Tree Verification scan starting at 1431878455688 with interval 21600000
     [exec] 2015-05-17 14:17:24,689 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,690 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-17 14:17:24,690 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-17 14:17:24,702 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1054975985-67.195.81.153-1431872242456 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 12ms
     [exec] 2015-05-17 14:17:24,702 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-1054975985-67.195.81.153-1431872242456 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 12ms
     [exec] 2015-05-17 14:17:24,703 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-1054975985-67.195.81.153-1431872242456: 13ms
     [exec] 2015-05-17 14:17:24,703 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-17 14:17:24,704 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1054975985-67.195.81.153-1431872242456/current/replicas> doesn't exist 
     [exec] 2015-05-17 14:17:24,704 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-17 14:17:24,704 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-17 14:17:24,704 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1054975985-67.195.81.153-1431872242456/current/replicas> doesn't exist 
     [exec] 2015-05-17 14:17:24,704 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-1054975985-67.195.81.153-1431872242456 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 0ms
     [exec] 2015-05-17 14:17:24,704 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-17 14:17:24,706 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 beginning handshake with NN
     [exec] 2015-05-17 14:17:24,714 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(883)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:41243, datanodeUuid=d2fa2be2-3d17-418d-b02a-270032114a57, infoPort=36594, infoSecurePort=0, ipcPort=41644, storageInfo=lv=-56;cid=testClusterID;nsid=629540541;c=0) storage d2fa2be2-3d17-418d-b02a-270032114a57
     [exec] 2015-05-17 14:17:24,714 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-17 14:17:24,715 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:41243
     [exec] 2015-05-17 14:17:24,720 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 successfully registered with NN
     [exec] 2015-05-17 14:17:24,720 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:52051 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-17 14:17:24,726 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2332)) - No heartbeat from DataNode: 127.0.0.1:41243
     [exec] 2015-05-17 14:17:24,727 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-17 14:17:24,736 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-17 14:17:24,736 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2 for DN 127.0.0.1:41243
     [exec] 2015-05-17 14:17:24,738 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-5881e9a8-64f4-434a-b980-ed2a486ecf63 for DN 127.0.0.1:41243
     [exec] 2015-05-17 14:17:24,748 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-17 14:17:24,749 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051
     [exec] 2015-05-17 14:17:24,762 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-5881e9a8-64f4-434a-b980-ed2a486ecf63 from datanode d2fa2be2-3d17-418d-b02a-270032114a57
     [exec] 2015-05-17 14:17:24,763 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-5881e9a8-64f4-434a-b980-ed2a486ecf63 node DatanodeRegistration(127.0.0.1:41243, datanodeUuid=d2fa2be2-3d17-418d-b02a-270032114a57, infoPort=36594, infoSecurePort=0, ipcPort=41644, storageInfo=lv=-56;cid=testClusterID;nsid=629540541;c=0), blocks: 0, hasStaleStorage: true, processing time: 0 msecs
     [exec] 2015-05-17 14:17:24,763 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2 from datanode d2fa2be2-3d17-418d-b02a-270032114a57
     [exec] 2015-05-17 14:17:24,763 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-88d3336e-9e69-4f45-80b6-af74e44e5ff2 node DatanodeRegistration(127.0.0.1:41243, datanodeUuid=d2fa2be2-3d17-418d-b02a-270032114a57, infoPort=36594, infoSecurePort=0, ipcPort=41644, storageInfo=lv=-56;cid=testClusterID;nsid=629540541;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
     [exec] 2015-05-17 14:17:24,780 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0x502b851717ac4bb,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 4 msec to generate and 28 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-17 14:17:24,781 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:24,832 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-17 14:17:24,843 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-17 14:17:24,843 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-17 14:17:24,843 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(378)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-17 14:17:24,843 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-17 14:17:24,845 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-17 14:17:25,213 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 41644
     [exec] 2015-05-17 14:17:25,214 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 41644
     [exec] 2015-05-17 14:17:25,215 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051 interrupted
     [exec] 2015-05-17 14:17:25,215 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-17 14:17:25,215 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57) service to localhost/127.0.0.1:52051
     [exec] 2015-05-17 14:17:25,317 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1054975985-67.195.81.153-1431872242456 (Datanode Uuid d2fa2be2-3d17-418d-b02a-270032114a57)
     [exec] 2015-05-17 14:17:25,317 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool BP-1054975985-67.195.81.153-1431872242456
     [exec] 2015-05-17 14:17:25,319 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk service threads
     [exec] 2015-05-17 14:17:25,319 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads have been shut down
     [exec] 2015-05-17 14:17:25,319 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-17 14:17:25,320 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-17 14:17:25,325 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-17 14:17:25,325 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-17 14:17:25,326 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-17 14:17:25,326 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-17 14:17:25,326 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 1 
     [exec] 2015-05-17 14:17:25,327 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-17 14:17:25,328 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-17 14:17:25,329 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-17 14:17:25,331 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 52051
     [exec] 2015-05-17 14:17:25,333 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 52051
     [exec] 2015-05-17 14:17:25,333 INFO  blockmanagement.BlockManager (BlockManager.java:run(3686)) - Stopping ReplicationMonitor.
     [exec] 2015-05-17 14:17:25,334 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-17 14:17:25,368 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-17 14:17:25,368 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for standby state
     [exec] 2015-05-17 14:17:25,370 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-17 14:17:25,471 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-17 14:17:25,472 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-17 14:17:25,472 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
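
The same startup, block-report, and shutdown sequence repeats here for build #2128; as the later console output shows, the failure comes from the antrun "site" step, not from this test. For orientation only, between waitActive() and shutdown() a test would normally exercise the cluster through its FileSystem. A minimal hedged sketch, assuming a running cluster as in the previous example (the path and payload below are arbitrary placeholders):

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsSmokeSketch {
      // Assumes 'cluster' is an already-started MiniDFSCluster, as in the previous sketch.
      static void writeAndReadBack(MiniDFSCluster cluster) throws Exception {
        FileSystem fs = cluster.getFileSystem();
        Path p = new Path("/tmp/smoke-test.txt");      // illustrative path
        try (FSDataOutputStream out = fs.create(p)) {
          out.writeBytes("hello mini dfs");            // writes a block to the single DataNode
        }
        try (FSDataInputStream in = fs.open(p)) {
          byte[] buf = new byte[32];
          int n = in.read(buf);                        // reads the block back
          System.out.println(new String(buf, 0, n, "UTF-8"));
        }
      }
    }
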
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.224 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-17T14:19:46+00:00
[INFO] Final Memory: 67M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362788 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-8157
Updating HADOOP-11988

Hadoop-Hdfs-trunk - Build # 2128 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2128/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8062 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.224 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-17T14:19:46+00:00
[INFO] Final Memory: 67M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs does not exist.
[ERROR] around Ant part ...<copy todir="/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">... @ 5:121 in /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362788 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-8157
Updating HADOOP-11988
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2127

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2127/changes>

Changes:

[junping_du] YARN-3505 addendum: fix an issue in previous patch.

[jlowe] YARN-2421. RM still allocates containers to an app in the FINISHING state. Contributed by Chang Li

[jing9] HDFS-8397. Refactor the error handling code in DataStreamer. Contributed by Tsz Wo Nicholas Sze.

[wheat9] HDFS-8394. Move getAdditionalBlock() and related functionalities into a separate class. Contributed by Haohui Mai.

[wheat9] HDFS-8403. Eliminate retries in TestFileCreation#testOverwriteOpenForWrite. Contributed by Arpit Agarwal.

[xgong] YARN-3526. ApplicationMaster tracking URL is incorrectly redirected on a QJM cluster. Contributed by Weiwei Yang

------------------------------------------
[...truncated 7874 lines...]
     [exec] 2015-05-16 14:17:29,090 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:53879
     [exec] 2015-05-16 14:17:29,219 INFO  web.DatanodeHttpServer (DatanodeHttpServer.java:start(150)) - Listening HTTP traffic on /127.0.0.1:58050
     [exec] 2015-05-16 14:17:29,221 INFO  datanode.DataNode (DataNode.java:startDataNode(1144)) - dnUserName = jenkins
     [exec] 2015-05-16 14:17:29,221 INFO  datanode.DataNode (DataNode.java:startDataNode(1145)) - supergroup = supergroup
     [exec] 2015-05-16 14:17:29,234 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
     [exec] 2015-05-16 14:17:29,235 INFO  ipc.Server (Server.java:run(622)) - Starting Socket Reader #1 for port 45799
     [exec] 2015-05-16 14:17:29,242 INFO  datanode.DataNode (DataNode.java:initIpcServer(844)) - Opened IPC server at /127.0.0.1:45799
     [exec] 2015-05-16 14:17:29,254 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
     [exec] 2015-05-16 14:17:29,256 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for nameservices: <default>
     [exec] 2015-05-16 14:17:29,267 INFO  datanode.DataNode (BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:36592 starting to offer service
     [exec] 2015-05-16 14:17:29,273 INFO  ipc.Server (Server.java:run(852)) - IPC Server Responder: starting
     [exec] 2015-05-16 14:17:29,274 INFO  ipc.Server (Server.java:run(692)) - IPC Server listener on 45799: starting
     [exec] 2015-05-16 14:17:29,487 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock> acquired by nodename 21557@asf904.gq1.ygridcore.net
     [exec] 2015-05-16 14:17:29,488 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1> is not formatted for BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,488 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-16 14:17:29,529 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,529 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2053080698-67.195.81.148-1431785847283>
     [exec] 2015-05-16 14:17:29,529 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2053080698-67.195.81.148-1431785847283> is not formatted for BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,530 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-16 14:17:29,530 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-2053080698-67.195.81.148-1431785847283 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2053080698-67.195.81.148-1431785847283/current>
     [exec] 2015-05-16 14:17:29,532 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock> acquired by nodename 21557@asf904.gq1.ygridcore.net
     [exec] 2015-05-16 14:17:29,532 INFO  common.Storage (DataStorage.java:loadStorageDirectory(272)) - Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2> is not formatted for BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,532 INFO  common.Storage (DataStorage.java:loadStorageDirectory(274)) - Formatting ...
     [exec] 2015-05-16 14:17:29,568 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,568 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2053080698-67.195.81.148-1431785847283>
     [exec] 2015-05-16 14:17:29,568 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2053080698-67.195.81.148-1431785847283> is not formatted for BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,568 INFO  common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
     [exec] 2015-05-16 14:17:29,568 INFO  common.Storage (BlockPoolSliceStorage.java:format(267)) - Formatting block pool BP-2053080698-67.195.81.148-1431785847283 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2053080698-67.195.81.148-1431785847283/current>
     [exec] 2015-05-16 14:17:29,570 INFO  datanode.DataNode (DataNode.java:initStorage(1405)) - Setting up storage: nsid=1653879258;bpid=BP-2053080698-67.195.81.148-1431785847283;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=1653879258;c=0;bpid=BP-2053080698-67.195.81.148-1431785847283;dnuuid=null
     [exec] 2015-05-16 14:17:29,571 INFO  datanode.DataNode (DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode UUID ce2c4b2a-7b5f-4550-b146-860f7611541b
     [exec] 2015-05-16 14:17:29,593 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-ba22ea50-06a8-4d05-8de8-b3c108493729
     [exec] 2015-05-16 14:17:29,593 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(393)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,> StorageType: DISK
     [exec] 2015-05-16 14:17:29,593 INFO  impl.FsDatasetImpl (FsVolumeList.java:addVolume(305)) - Added new volume: DS-5ba16e0d-5a9c-4283-baad-912081da0c8a
     [exec] 2015-05-16 14:17:29,594 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(393)) - Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,> StorageType: DISK
     [exec] 2015-05-16 14:17:29,598 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2060)) - Registered FSDatasetState MBean
     [exec] 2015-05-16 14:17:29,605 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(332)) - Periodic Directory Tree Verification scan starting at 1431806550605 with interval 21600000
     [exec] 2015-05-16 14:17:29,605 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2508)) - Adding block pool BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,606 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-2053080698-67.195.81.148-1431785847283 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-16 14:17:29,607 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(404)) - Scanning block pool BP-2053080698-67.195.81.148-1431785847283 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-16 14:17:29,619 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-2053080698-67.195.81.148-1431785847283 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 11ms
     [exec] 2015-05-16 14:17:29,619 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Time taken to scan block pool BP-2053080698-67.195.81.148-1431785847283 on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 12ms
     [exec] 2015-05-16 14:17:29,619 INFO  impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for block pool BP-2053080698-67.195.81.148-1431785847283: 14ms
     [exec] 2015-05-16 14:17:29,620 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-2053080698-67.195.81.148-1431785847283 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
     [exec] 2015-05-16 14:17:29,620 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-2053080698-67.195.81.148-1431785847283/current/replicas> doesn't exist 
     [exec] 2015-05-16 14:17:29,620 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-2053080698-67.195.81.148-1431785847283 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>: 0ms
     [exec] 2015-05-16 14:17:29,620 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(190)) - Adding replicas to map for block pool BP-2053080698-67.195.81.148-1431785847283 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
     [exec] 2015-05-16 14:17:29,621 INFO  impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-2053080698-67.195.81.148-1431785847283/current/replicas> doesn't exist 
     [exec] 2015-05-16 14:17:29,621 INFO  impl.FsDatasetImpl (FsVolumeList.java:run(195)) - Time to add replicas to map for block pool BP-2053080698-67.195.81.148-1431785847283 on volume <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>: 1ms
     [exec] 2015-05-16 14:17:29,621 INFO  impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to map: 2ms
     [exec] 2015-05-16 14:17:29,623 INFO  datanode.DataNode (BPServiceActor.java:register(746)) - Block pool BP-2053080698-67.195.81.148-1431785847283 (Datanode Uuid ce2c4b2a-7b5f-4550-b146-860f7611541b) service to localhost/127.0.0.1:36592 beginning handshake with NN
     [exec] 2015-05-16 14:17:29,628 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
     [exec] 2015-05-16 14:17:29,629 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
     [exec] 2015-05-16 14:17:29,634 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(883)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:33085, datanodeUuid=ce2c4b2a-7b5f-4550-b146-860f7611541b, infoPort=58050, infoSecurePort=0, ipcPort=45799, storageInfo=lv=-56;cid=testClusterID;nsid=1653879258;c=0) storage ce2c4b2a-7b5f-4550-b146-860f7611541b
     [exec] 2015-05-16 14:17:29,635 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-16 14:17:29,636 INFO  net.NetworkTopology (NetworkTopology.java:add(418)) - Adding a new node: /default-rack/127.0.0.1:33085
     [exec] 2015-05-16 14:17:29,641 INFO  datanode.DataNode (BPServiceActor.java:register(764)) - Block pool Block pool BP-2053080698-67.195.81.148-1431785847283 (Datanode Uuid ce2c4b2a-7b5f-4550-b146-860f7611541b) service to localhost/127.0.0.1:36592 successfully registered with NN
     [exec] 2015-05-16 14:17:29,641 INFO  datanode.DataNode (BPServiceActor.java:offerService(625)) - For namenode localhost/127.0.0.1:36592 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2015-05-16 14:17:29,652 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage changes from 0 to 0
     [exec] 2015-05-16 14:17:29,652 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-ba22ea50-06a8-4d05-8de8-b3c108493729 for DN 127.0.0.1:33085
     [exec] 2015-05-16 14:17:29,653 INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID DS-5ba16e0d-5a9c-4283-baad-912081da0c8a for DN 127.0.0.1:33085
     [exec] 2015-05-16 14:17:29,662 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-2053080698-67.195.81.148-1431785847283 (Datanode Uuid ce2c4b2a-7b5f-4550-b146-860f7611541b) service to localhost/127.0.0.1:36592 trying to claim ACTIVE state with txid=1
     [exec] 2015-05-16 14:17:29,662 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-2053080698-67.195.81.148-1431785847283 (Datanode Uuid ce2c4b2a-7b5f-4550-b146-860f7611541b) service to localhost/127.0.0.1:36592
     [exec] 2015-05-16 14:17:29,674 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-ba22ea50-06a8-4d05-8de8-b3c108493729 from datanode ce2c4b2a-7b5f-4550-b146-860f7611541b
     [exec] 2015-05-16 14:17:29,674 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-ba22ea50-06a8-4d05-8de8-b3c108493729 node DatanodeRegistration(127.0.0.1:33085, datanodeUuid=ce2c4b2a-7b5f-4550-b146-860f7611541b, infoPort=58050, infoSecurePort=0, ipcPort=45799, storageInfo=lv=-56;cid=testClusterID;nsid=1653879258;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
     [exec] 2015-05-16 14:17:29,674 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1816)) - Processing first storage report for DS-5ba16e0d-5a9c-4283-baad-912081da0c8a from datanode ce2c4b2a-7b5f-4550-b146-860f7611541b
     [exec] 2015-05-16 14:17:29,675 INFO  BlockStateChange (BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage DS-5ba16e0d-5a9c-4283-baad-912081da0c8a node DatanodeRegistration(127.0.0.1:33085, datanodeUuid=ce2c4b2a-7b5f-4550-b146-860f7611541b, infoPort=58050, infoSecurePort=0, ipcPort=45799, storageInfo=lv=-56;cid=testClusterID;nsid=1653879258;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
     [exec] 2015-05-16 14:17:29,691 INFO  datanode.DataNode (BPServiceActor.java:blockReport(490)) - Successfully sent block report 0xe694a77a61df32c3,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and 26 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
     [exec] 2015-05-16 14:17:29,691 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(693)) - Got finalize command for block pool BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:29,738 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2299)) - Cluster is active
     [exec] 2015-05-16 14:17:29,747 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
     [exec] 2015-05-16 14:17:29,748 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
     [exec] 2015-05-16 14:17:29,748 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(378)) - DirectoryScanner: shutdown has been called
     [exec] 2015-05-16 14:17:29,748 INFO  datanode.DataNode (DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
     [exec] 2015-05-16 14:17:29,750 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-16 14:17:30,111 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 45799
     [exec] 2015-05-16 14:17:30,112 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 45799
     [exec] 2015-05-16 14:17:30,112 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-16 14:17:30,112 WARN  datanode.DataNode (BPServiceActor.java:offerService(701)) - BPOfferService for Block pool BP-2053080698-67.195.81.148-1431785847283 (Datanode Uuid ce2c4b2a-7b5f-4550-b146-860f7611541b) service to localhost/127.0.0.1:36592 interrupted
     [exec] 2015-05-16 14:17:30,113 WARN  datanode.DataNode (BPServiceActor.java:run(831)) - Ending block pool service for: Block pool BP-2053080698-67.195.81.148-1431785847283 (Datanode Uuid ce2c4b2a-7b5f-4550-b146-860f7611541b) service to localhost/127.0.0.1:36592
     [exec] 2015-05-16 14:17:30,214 INFO  datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-2053080698-67.195.81.148-1431785847283 (Datanode Uuid ce2c4b2a-7b5f-4550-b146-860f7611541b)
     [exec] 2015-05-16 14:17:30,214 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2518)) - Removing block pool BP-2053080698-67.195.81.148-1431785847283
     [exec] 2015-05-16 14:17:30,215 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:run(2989)) - LazyWriter was interrupted, exiting
     [exec] 2015-05-16 14:17:30,216 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(184)) - Shutting down all async disk service threads
     [exec] 2015-05-16 14:17:30,216 INFO  impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(192)) - All async disk service threads have been shut down
     [exec] 2015-05-16 14:17:30,216 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async lazy persist service threads
     [exec] 2015-05-16 14:17:30,216 INFO  impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist service threads have been shut down
     [exec] 2015-05-16 14:17:30,221 INFO  datanode.DataNode (DataNode.java:shutdown(1821)) - Shutdown complete.
     [exec] 2015-05-16 14:17:30,221 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-16 14:17:30,221 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
     [exec] 2015-05-16 14:17:30,221 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
     [exec] 2015-05-16 14:17:30,222 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 1 2 
     [exec] 2015-05-16 14:17:30,223 INFO  namenode.FSNamesystem (FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
     [exec] 2015-05-16 14:17:30,224 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-16 14:17:30,224 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001> -> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
     [exec] 2015-05-16 14:17:30,226 INFO  ipc.Server (Server.java:stop(2569)) - Stopping server on 36592
     [exec] 2015-05-16 14:17:30,226 INFO  ipc.Server (Server.java:run(724)) - Stopping IPC Server listener on 36592
     [exec] 2015-05-16 14:17:30,226 INFO  ipc.Server (Server.java:run(857)) - Stopping IPC Server Responder
     [exec] 2015-05-16 14:17:30,226 INFO  blockmanagement.BlockManager (BlockManager.java:run(3686)) - Stopping ReplicationMonitor.
     [exec] 2015-05-16 14:17:30,261 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for active state
     [exec] 2015-05-16 14:17:30,262 INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for standby state
     [exec] 2015-05-16 14:17:30,263 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
     [exec] 2015-05-16 14:17:30,364 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
     [exec] 2015-05-16 14:17:30,364 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
     [exec] 2015-05-16 14:17:30,365 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown complete.
     [echo] Finished test_native_mini_dfs
[INFO] Executed tasks
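
The [exec] output above is the test_native_mini_dfs target starting and then tearing down a single-DataNode MiniDFSCluster: storage format, DataNode registration, the first block report, and the shutdown sequence. For orientation only, the Java-side lifecycle those messages correspond to is sketched below using the standard MiniDFSCluster API; the real target drives this through the libhdfs native client, so treat the class and method choices here as an illustrative assumption, not the test's actual code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsLifecycleSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // Builder.build() formats the name/data directories and starts the NameNode
        // and DataNode -- the "Formatting ..." and "registerDatanode" lines above.
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)
            .build();
        try {
          // Blocks until the DataNode has registered and reported -- the
          // "Waiting for cluster to become active" / "Cluster is active" lines.
          cluster.waitActive();
          // ... test operations against cluster.getFileSystem() would run here ...
        } finally {
          // Produces the "Shutting down the Mini HDFS Cluster" sequence above.
          cluster.shutdown();
        }
      }
    }
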
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO] 
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ hadoop-hdfs ---
[INFO] 
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.643 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.070 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-16T14:19:55+00:00
[INFO] Final Memory: 68M/696M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs> does not exist.
[ERROR] around Ant part ...<copy todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...> @ 5:121 in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362668 bytes
Compression is 0.0%
Took 27 sec
Recording test results
Updating HDFS-8394
Updating HDFS-8403
Updating HDFS-8397
Updating YARN-3505
Updating YARN-3526
Updating YARN-2421

Build failed in Jenkins: Hadoop-Hdfs-trunk #2126

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2126/changes>

Changes:

[devaraj] MAPREDUCE-5708. Duplicate String.format in

[junping_du] YARN-3505. Node's Log Aggregation Report with SUCCEED should not cached in RMApps. Contributed by Xuan Gong.

[cnauroth] HADOOP-11713. ViewFileSystem should support snapshot methods. Contributed by Rakesh R.

[raviprak] YARN-1519. Check in container-executor if sysconf is implemented before using it (Radim Kolar and Eric Payne via raviprak)

[vinodkv] Fixing MR intermediate spills. Contributed by Arun Suresh.

[vinodkv] Fixing HDFS state-store. Contributed by Arun Suresh.

[aajisaka] HDFS-8371. Fix test failure in TestHdfsConfigFields for spanreceiver properties. Contributed by Ray Chiang.

[aajisaka] HDFS-8350. Remove old webhdfs.xml and other outdated documentation stuff. Contributed by Brahma Reddy Battula.

[cnauroth] HADOOP-11960. Enable Azure-Storage Client Side logging. Contributed by Dushyanth.

[vinayakumarb] HDFS-6888. Allow selectively audit logging ops (Contributed by Chen He)

[devaraj] MAPREDUCE-6273. HistoryFileManager should check whether summaryFile exists

------------------------------------------
[...truncated 6652 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.397 sec - in org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.133 sec - in org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.813 sec - in org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.191 sec - in org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.91 sec - in org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.775 sec - in org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.411 sec - in org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.287 sec - in org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.68 sec - in org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.992 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.441 sec - in org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.248 sec - in org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.14 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.942 sec - in org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.368 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.711 sec - in org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.43 sec - in org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.252 sec - in org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.418 sec - in org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.321 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.252 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.22 sec - in org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.989 sec - in org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.613 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.239 sec - in org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.224 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.614 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.512 sec - in org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.079 sec - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.018 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.326 sec - in org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.76 sec - in org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 180.11 sec - in org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.111 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.436 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.993 sec - in org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec - in org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.09 sec - in org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.293 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.885 sec - in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.538 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.969 sec - in org.apache.hadoop.hdfs.TestFetchImage
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.785 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.58 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.35 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.144 sec - in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.594 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.039 sec - in org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.406 sec - in org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.372 sec - in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.825 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.573 sec - in org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.673 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.961 sec - in org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.177 sec - in org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.852 sec - in org.apache.hadoop.hdfs.TestHDFSServerPorts
Running org.apache.hadoop.hdfs.TestDFSPacket
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.121 sec - in org.apache.hadoop.hdfs.TestDFSPacket
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.839 sec - in org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.255 sec - in org.apache.hadoop.hdfs.TestDFSPermission
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.329 sec - in org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.659 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.781 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.883 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.636 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.89 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.981 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.476 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.978 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.758 sec - in org.apache.hadoop.security.TestRefreshUserMappings

Results :

Tests in error: 
  TestViewFileSystemWithXAttrs.clusterSetupAtBeginning:62 » NoClassDefFound org/...
  TestViewFileSystemWithXAttrs.ClusterShutdownAtEnd:74 NullPointer

Tests run: 3426, Failures: 0, Errors: 2, Skipped: 17
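
Both errors above share one root cause: the NoClassDefFoundError thrown in the @BeforeClass method aborts setup before the shared MiniDFSCluster field is ever assigned, so the @AfterClass teardown then dereferences null. The sketch below shows that failure mode in simplified form, reusing only the method names from the report; it is an assumption-laden illustration, not the actual TestViewFileSystemWithXAttrs source.

    import java.io.IOException;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class ViewFsXAttrLifecycleSketch {
      private static MiniDFSCluster cluster;

      @BeforeClass
      public static void clusterSetupAtBeginning() throws IOException {
        // A NoClassDefFoundError raised here (for example, a class missing from the
        // test classpath) propagates before 'cluster' is assigned.
        cluster = new MiniDFSCluster.Builder(new HdfsConfiguration())
            .numDataNodes(2)
            .build();
        cluster.waitActive();
      }

      @AfterClass
      public static void ClusterShutdownAtEnd() {
        // With setup aborted, 'cluster' is still null, hence the NullPointerException.
        cluster.shutdown();
      }
    }
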

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.532 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.054 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-05-15T14:30:17+00:00
[INFO] Final Memory: 52M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363010 bytes
Compression is 0.0%
Took 12 sec
Recording test results
Updating MAPREDUCE-5708
Updating HDFS-8371
Updating HADOOP-11713
Updating YARN-3505
Updating HADOOP-11960
Updating HDFS-8350
Updating HDFS-6888
Updating MAPREDUCE-6273
Updating YARN-1519

Build failed in Jenkins: Hadoop-Hdfs-trunk #2125

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2125/changes>

Changes:

[vinayakumarb] HDFS-6300. Prevent multiple balancers from running simultaneously (Contributed by Rakesh R)

[szetszwo] HDFS-8143. Mover should exit after some retry when failed to move blocks.  Contributed by surendra singh lilhore

[kihwal] HDFS-8358. TestTraceAdmin fails. Contributed by Masatake Iwasaki.

[cnauroth] HADOOP-11966. Variable cygwin is undefined in hadoop-config.sh when executed through hadoop-daemon.sh. Contributed by Chris Nauroth.

[wangda] YARN-2921. Fix MockRM/MockAM#waitForState sleep too long. (Tsuyoshi Ozawa via wangda)

[xgong] YARN-3626. On Windows localized resources are not moved to the front of the classpath when they should be. Contributed by Craig Welch

[wangda] YARN-3579. CommonNodeLabelsManager should support NodeLabel instead of string label name when getting node-to-label/label-to-label mappings. (Sunil G via wangda)

[wangda] YARN-3521. Support return structured NodeLabel objects in REST API (Sunil G via wangda)

[jlowe] Update fix version for YARN-3457 and YARN-3537.

[jlowe] YARN-3641. NodeManager: stopRecoveryStore() shouldn't be skipped when exceptions happen in stopping NM's sub-services. Contributed by Junping Du

[cmccabe] HDFS-8380. Always call addStoredBlock on blocks which have been shifted from one storage to another (cmccabe)

[wangda] YARN-3362. Add node label usage in RM CapacityScheduler web UI. (Naganarasimha G R via wangda)

[ozawa] HADOOP-11361. Fix a race condition in MetricsSourceAdapter.updateJmxCache. Contributed by Brahma Reddy Battula.

[arp] HDFS-8243. Files written by TestHostsFiles and TestNameNodeMXBean are causing Release Audit Warnings. (Contributed by Ruth Wisniewski)

[wheat9] HDFS-7728. Avoid updating quota usage while loading edits. Contributed by Jing Zhao.

[vinayakumarb] HADOOP-8174. Remove confusing comment in Path#isAbsolute() (Contributed by Suresh Srinivas)

[vinayakumarb] HADOOP-10993. Dump java command line to *.out file (Contributed by Kengo Seki)

[vinayakumarb] HDFS-8150. Make getFileChecksum fail for blocks under construction (Contributed by J.Andreina)

------------------------------------------
[...truncated 6662 lines...]
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.044 sec - in org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.525 sec - in org.apache.hadoop.hdfs.TestDFSAddressConfig
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.665 sec - in org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestPread
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.654 sec - in org.apache.hadoop.hdfs.TestPread
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.093 sec - in org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.955 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.583 sec - in org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 177.967 sec - in org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.197 sec - in org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestDefaultNameNodePort
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.87 sec - in org.apache.hadoop.hdfs.TestDefaultNameNodePort
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.441 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.897 sec - in org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.563 sec - in org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.1 sec - in org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.5 sec - in org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.307 sec - in org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.358 sec - in org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.109 sec - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.883 sec - in org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.394 sec - in org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.969 sec - in org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.731 sec - in org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.244 sec - in org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.098 sec - in org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.175 sec - in org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.432 sec - in org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.524 sec - in org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.573 sec - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.211 sec - in org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.94 sec - in org.apache.hadoop.hdfs.TestFetchImage
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.05 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.295 sec - in org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.62 sec - in org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.447 sec - in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.35 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.702 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestRemoteBlockReader2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.47 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader2
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.395 sec - in org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.911 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.001 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.594 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.006 sec - in org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.289 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.852 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.898 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.68 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.994 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.353 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.021 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.795 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.491 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.447 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec <<< FAILURE! - in org.apache.hadoop.tools.TestHdfsConfigFields
testCompareXmlAgainstConfigurationClass(org.apache.hadoop.tools.TestHdfsConfigFields)  Time elapsed: 0.477 sec  <<< FAILURE!
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.606 sec - in org.apache.hadoop.tools.TestJMXGet

Results :

Failed tests: 
  TestHdfsConfigFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:468 hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Tests run: 3403, Failures: 1, Errors: 0, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.110 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:48 h
[INFO] Finished at: 2015-05-14T15:14:56+00:00
[INFO] Final Memory: 63M/690M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362539 bytes
Compression is 0.0%
Took 15 sec
Recording test results
Updating YARN-3521
Updating HADOOP-11966
Updating YARN-2921
Updating YARN-3457
Updating HDFS-8243
Updating YARN-3537
Updating YARN-3362
Updating HDFS-8380
Updating YARN-3579
Updating HADOOP-10993
Updating HADOOP-8174
Updating HDFS-8150
Updating HDFS-8358
Updating HDFS-8143
Updating HADOOP-11361
Updating HDFS-6300
Updating YARN-3641
Updating YARN-3626
Updating HDFS-7728

Hadoop-Hdfs-trunk - Build # 2125 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2125/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6855 lines...]
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.110 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:48 h
[INFO] Finished at: 2015-05-14T15:14:56+00:00
[INFO] Final Memory: 63M/690M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362539 bytes
Compression is 0.0%
Took 15 sec
Recording test results
Updating YARN-3521
Updating HADOOP-11966
Updating YARN-2921
Updating YARN-3457
Updating HDFS-8243
Updating YARN-3537
Updating YARN-3362
Updating HDFS-8380
Updating YARN-3579
Updating HADOOP-10993
Updating HADOOP-8174
Updating HDFS-8150
Updating HDFS-8358
Updating HDFS-8143
Updating HADOOP-11361
Updating HDFS-6300
Updating YARN-3641
Updating YARN-3626
Updating HDFS-7728
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.tools.TestHdfsConfigFields.testCompareXmlAgainstConfigurationClass

Error Message:
hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Stack Trace:
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
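
This recurring assertion comes from TestConfigurationFieldsBase, which cross-checks the property names declared in hdfs-default.xml against the String constants defined in DFSConfigKeys; the failure means two names documented in the XML have no matching constant in the class. As a rough sketch of that kind of comparison (assuming the usual <configuration>/<property>/<name> layout of hdfs-default.xml and plain reflection over DFSConfigKeys; the real base class is considerably more involved):

    import java.lang.reflect.Field;
    import java.lang.reflect.Modifier;
    import java.util.HashSet;
    import java.util.Set;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    // Hypothetical sketch: report property names present in hdfs-default.xml but
    // not declared as String constants in DFSConfigKeys.
    public class ConfigXmlVsKeysSketch {
      public static void main(String[] args) throws Exception {
        // Every <name> element in hdfs-default.xml, read from the classpath.
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(ConfigXmlVsKeysSketch.class.getClassLoader()
                .getResourceAsStream("hdfs-default.xml"));
        Set<String> xmlProps = new HashSet<String>();
        NodeList names = doc.getElementsByTagName("name");
        for (int i = 0; i < names.getLength(); i++) {
          xmlProps.add(names.item(i).getTextContent().trim());
        }

        // Values of all public static String constants on DFSConfigKeys.
        Set<String> keyConstants = new HashSet<String>();
        Class<?> keys = Class.forName("org.apache.hadoop.hdfs.DFSConfigKeys");
        for (Field f : keys.getFields()) {
          if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class) {
            keyConstants.add((String) f.get(null));
          }
        }

        // Whatever remains is documented in the XML but missing from the class.
        xmlProps.removeAll(keyConstants);
        System.out.println(xmlProps.size()
            + " properties missing from DFSConfigKeys: " + xmlProps);
      }
    }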



Build failed in Jenkins: Hadoop-Hdfs-trunk #2124

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2124/changes>

Changes:

[yliu] HDFS-8255. Rename getBlockReplication to getPreferredBlockReplication. (Contributed by Zhe Zhang)

[ozawa] MAPREDUCE-6361. NPE issue in shuffle caused by concurrent issue between copySucceeded() in one thread and copyFailed() in another thread on the same host. Contributed by Junping Du.

[devaraj] YARN-3629. NodeID is always printed as "null" in node manager

[wheat9] HADOOP-11962. Sasl message with MD5 challenge text shouldn't be LOG out even in debug level. Contributed by Junping Du.

[kasha] YARN-3613. TestContainerManagerSecurity should init and start Yarn cluster in setup instead of individual methods. (nijel via kasha)

[vinodkv] MAPREDUCE-6251. Added a new config for JobClient to retry JobStatus calls so that they don't fail on history-server backed by DFSes with not so strong guarantees. Contributed by Craig Welch.

[aajisaka] HDFS-6184. Capture NN's thread dump when it fails over. Contributed by Ming Ma.

[zjshen] YARN-3539. Updated timeline server documentation and marked REST APIs evolving. Contributed by Steve Loughran.

[ozawa] MAPREDUCE-6366. mapreduce.terasort.final.sync configuration in TeraSort doesn't work. Contributed by Takuya Fukudome.

[aajisaka] HADOOP-9723. Improve error message when hadoop archive output path already exists. Contributed by Jean-Baptiste Onofré and Yongjun Zhang.

------------------------------------------
[...truncated 6682 lines...]
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2165)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)

Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.241 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.555 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.879 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.018 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.635 sec <<< FAILURE! - in org.apache.hadoop.tools.TestHdfsConfigFields
testCompareXmlAgainstConfigurationClass(org.apache.hadoop.tools.TestHdfsConfigFields)  Time elapsed: 0.477 sec  <<< FAILURE!
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.419 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.587 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.395 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.283 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.832 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.875 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.595 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.93 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.669 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.814 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.172 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.957 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.839 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.062 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.981 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.449 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.831 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.693 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.877 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.918 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.366 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.544 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.676 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.437 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.636 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.179 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.432 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.714 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.595 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.41 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.533 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.873 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.527 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.012 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.666 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.761 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.204 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.878 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.571 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.558 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.396 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.565 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.381 sec - in org.apache.hadoop.fs.TestUnbuffer

Results :

Failed tests: 
  TestHdfsConfigFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:468 hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Tests in error: 
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3399, Failures: 1, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 45.260 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.071 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-13T14:20:19+00:00
[INFO] Final Memory: 52M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362741 bytes
Compression is 0.0%
Took 16 sec
Recording test results
Updating HADOOP-9723
Updating MAPREDUCE-6361
Updating YARN-3613
Updating YARN-3539
Updating HDFS-6184
Updating MAPREDUCE-6366
Updating MAPREDUCE-6251
Updating HDFS-8255
Updating HADOOP-11962
Updating YARN-3629

Build failed in Jenkins: Hadoop-Hdfs-trunk #2123

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2123/changes>

Changes:

[junping_du] YARN-3587. Fix the javadoc of DelegationTokenSecretManager in yarn, etc. projects. Contributed by Gabor Liptak.

[aajisaka] HDFS-8241. Remove unused NameNode startup option -finalize. Contributed by Brahma Reddy Battula.

[aajisaka] HADOOP-11663. Remove description about Java 6 from docs. Contributed by Masatake Iwasaki.

[aw] HADOOP-11928. Test-patch check for @author tags incorrectly flags removal of @author tags (Kengo Seki via aw)

[aw] HADOOP-11951. test-patch should give better info about failures to handle dev-support updates without resetrepo option (Sean Busbey via aw)

[aw] HADOOP-11950. Add cli option to test-patch to set the project-under-test (Sean Busbey via aw)

[aw] HADOOP-11948. test-patch's issue matching regex should be configurable. (Sean Busbey via aw)

[aw] HADOOP-11947. test-patch should return early from determine-issue when run in jenkins mode. (Sean Busbey via aw)

[aw] re-commit of HADOOP-11881

[kihwal] HDFS-7916. 'reportBadBlocks' from datanodes to standby Node BPServiceActor goes for infinite loop. Contributed by Rushabh Shah.

[wangda] Moved YARN-3434. (Interaction between reservations and userlimit can result in significant ULF violation.) From 2.8.0 to 2.7.1

[jlowe] MAPREDUCE-5465. Tasks are often killed before they exit on their own. Contributed by Ming Ma

[wangda] YARN-3489. RMServerUtils.validateResourceRequests should only obtain queue info once. (Varun Saxena via wangda)

[wangda] Move YARN-3493 in CHANGES.txt from 2.8 to 2.7.1

[vinayakumarb] HDFS-8362. Java Compilation Error in TestHdfsConfigFields.java (Contributed by Arshad Mohammad)

[vinayakumarb] MAPREDUCE-6360. TestMapreduceConfigFields is placed in wrong dir, introducing compile error (Contributed by Arshad Mohammad)

[devaraj] YARN-3513. Remove unused variables in ContainersMonitorImpl and add debug

------------------------------------------
[...truncated 6727 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.894 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.437 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.778 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.936 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.476 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.482 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.388 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.615 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.612 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.547 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.723 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.85 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.882 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.817 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.878 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.213 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.12 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.001 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.601 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.984 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.791 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.337 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.95 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.71 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.812 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.088 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.45 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.526 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.627 sec <<< FAILURE! - in org.apache.hadoop.tools.TestHdfsConfigFields
testCompareXmlAgainstConfigurationClass(org.apache.hadoop.tools.TestHdfsConfigFields)  Time elapsed: 0.471 sec  <<< FAILURE!
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.244 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.1 sec <<< FAILURE! - in org.apache.hadoop.tracing.TestTraceAdmin
testCreateAndDestroySpanReceiver(org.apache.hadoop.tracing.TestTraceAdmin)  Time elapsed: 4.018 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)
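
The RemoteException above surfaces out of SpanReceiverHost.loadInstance on the NameNode, which instantiates the receiver class named in the request by reflection; "Failed to load SpanReceiver" therefore usually means the class (here org.apache.htrace.impl.LocalFileSpanReceiver) or one of its dependencies is not on the server's classpath, or its constructor threw. A stripped-down, hypothetical sketch of that reflective load (the real loadInstance also wires in the supplied trace configuration) looks like:

    // Hypothetical sketch of a reflective span-receiver load; the wrapped cause
    // distinguishes a missing class from a constructor that failed.
    public final class ReceiverLoaderSketch {
      public static Object load(String className) {
        try {
          Class<?> clazz = Class.forName(className);
          return clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
          throw new RuntimeException("Failed to load SpanReceiver " + className, e);
        }
      }

      public static void main(String[] args) {
        load("org.apache.htrace.impl.LocalFileSpanReceiver");
      }
    }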

Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.769 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.982 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.062 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.364 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.697 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.809 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.33 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.778 sec - in org.apache.hadoop.cli.TestXAttrCLI

Results :

Failed tests: 
  TestHdfsConfigFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:468 hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Tests in error: 
  TestPeerCache.testDomainSocketPeers:270 » NoClassDefFound org/apache/hadoop/ne...
  TestFSImageWithXAttr.testPersistXAttr:106->testXAttr:87->restart:129 » Bind Po...
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3398, Failures: 1, Errors: 3, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.284 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-12T14:41:08+00:00
[INFO] Final Memory: 53M/704M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362827 bytes
Compression is 0.0%
Took 19 sec
Recording test results
Updating HADOOP-11947
Updating HDFS-8241
Updating HADOOP-11948
Updating HDFS-7916
Updating MAPREDUCE-5465
Updating HDFS-8362
Updating HADOOP-11881
Updating YARN-3587
Updating HADOOP-11928
Updating YARN-3434
Updating MAPREDUCE-6360
Updating YARN-3493
Updating HADOOP-11663
Updating YARN-3513
Updating HADOOP-11951
Updating YARN-3489
Updating HADOOP-11950

Hadoop-Hdfs-trunk - Build # 2123 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2123/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6920 lines...]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.284 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-12T14:41:08+00:00
[INFO] Final Memory: 53M/704M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362827 bytes
Compression is 0.0%
Took 19 sec
Recording test results
Updating HADOOP-11947
Updating HDFS-8241
Updating HADOOP-11948
Updating HDFS-7916
Updating MAPREDUCE-5465
Updating HDFS-8362
Updating HADOOP-11881
Updating YARN-3587
Updating HADOOP-11928
Updating YARN-3434
Updating MAPREDUCE-6360
Updating YARN-3493
Updating HADOOP-11663
Updating YARN-3513
Updating HADOOP-11951
Updating YARN-3489
Updating HADOOP-11950
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestPeerCache.testDomainSocketPeers

Error Message:
org/apache/hadoop/net/unix/DomainSocket

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/net/unix/DomainSocket
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.apache.hadoop.hdfs.TestPeerCache$FakePeer.getDomainSocket(TestPeerCache.java:125)
	at org.apache.hadoop.hdfs.PeerCache.putInternal(PeerCache.java:203)
	at org.apache.hadoop.hdfs.PeerCache.put(PeerCache.java:194)
	at org.apache.hadoop.hdfs.TestPeerCache.testDomainSocketPeers(TestPeerCache.java:270)
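
The NoClassDefFoundError above means the forked test JVM could not resolve org.apache.hadoop.net.unix.DomainSocket at the point TestPeerCache touched it, which points at a test-classpath or native-support problem in that particular run rather than a logic bug. One common defensive pattern for environment-dependent tests of this kind (shown only as a generic JUnit 4 sketch, not as how this specific failure was resolved) is to probe for the class up front and skip instead of erroring:

    import org.junit.Assume;
    import org.junit.Before;

    // Hypothetical base-class guard: skip DomainSocket-dependent tests when the
    // class (or its native support) cannot be loaded in the test JVM.
    public class DomainSocketGuardSketch {
      @Before
      public void requireDomainSocket() {
        boolean available;
        try {
          Class.forName("org.apache.hadoop.net.unix.DomainSocket");
          available = true;
        } catch (Throwable t) {  // ClassNotFoundException or a linkage error
          available = false;
        }
        Assume.assumeTrue("DomainSocket is not available on the test classpath",
            available);
      }
    }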


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr.testPersistXAttr

Error Message:
Port in use: localhost:34780

Stack Trace:
java.net.BindException: Port in use: localhost:34780
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:141)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:752)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:638)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:809)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:793)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1481)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1853)
	at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1818)
	at org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr.restart(TestFSImageWithXAttr.java:129)
	at org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr.testXAttr(TestFSImageWithXAttr.java:87)
	at org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr.testPersistXAttr(TestFSImageWithXAttr.java:106)
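
The BindException above is the classic port-in-use race: the restarted NameNode HTTP server tries to rebind localhost:34780, and the address is no longer free when the restart happens. The usual way to make such tests robust is to ask the OS for an ephemeral port by binding to port 0 and then reading back the assigned port; as a generic JDK illustration (not the actual MiniDFSCluster restart path, which re-binds the previously used address):

    import java.net.InetAddress;
    import java.net.ServerSocket;

    // Generic ephemeral-port illustration: binding to port 0 lets the OS pick a
    // free port, which can then be read back and recorded for later use.
    public class EphemeralPortSketch {
      public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0, 0, InetAddress.getByName("localhost"));
        try {
          System.out.println("bound to localhost:" + ss.getLocalPort());
        } finally {
          ss.close();
        }
      }
    }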


FAILED:  org.apache.hadoop.tools.TestHdfsConfigFields.testCompareXmlAgainstConfigurationClass

Error Message:
hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Stack Trace:
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
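
For context, this assertion means hdfs-default.xml declares two property names with no matching String constant in DFSConfigKeys. A rough sketch of that kind of comparison is shown below; it is assumed for illustration only (the real TestConfigurationFieldsBase logic is more involved), and the class name XmlVsKeysCheck is hypothetical.

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class XmlVsKeysCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(false);
        conf.addResource("hdfs-default.xml");   // must be on the classpath

        // Collect the String constants declared in DFSConfigKeys
        // (note: this naive filter also picks up *_DEFAULT value strings).
        Set<String> declaredKeys = new HashSet<>();
        for (Field f : DFSConfigKeys.class.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class) {
                declaredKeys.add((String) f.get(null));
            }
        }

        // Report xml properties that have no matching constant.
        for (Map.Entry<String, String> e : conf) {
            if (!declaredKeys.contains(e.getKey())) {
                System.out.println("missing in DFSConfigKeys: " + e.getKey());
            }
        }
    }
}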


FAILED:  org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)



Build failed in Jenkins: Hadoop-Hdfs-trunk #2122

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2122/changes>

Changes:

[aajisaka] HDFS-8351. Remove namenode -finalize option from document. (aajisaka)

------------------------------------------
[...truncated 6697 lines...]
#  SIGBUS (0x7) at pc=0x00007f16e5823982, pid=22403, tid=139736364795648
#
# JRE version: Java(TM) SE Runtime Environment (7.0_55-b13) (build 1.7.0_55-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libzip.so+0x4982]  newEntry+0x62
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/hs_err_pid22403.log>
Compiled method (nm)    2378   46     n       java.util.zip.ZipFile::getEntry (native)
 total in heap  [0x00007f16dd0761d0,0x00007f16dd076568] = 920
 relocation     [0x00007f16dd0762f0,0x00007f16dd076350] = 96
 main code      [0x00007f16dd076360,0x00007f16dd076568] = 520
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Aborted
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.474 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.42 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.496 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.073 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.834 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.521 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.094 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.242 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.434 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.322 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.754 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.194 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.441 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.981 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.542 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.832 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.655 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.41 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.23 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.198 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.651 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.994 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.567 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.771 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.745 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.764 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.871 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.923 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.145 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.052 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.001 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.36 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.859 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.872 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.292 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.979 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.626 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.824 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.044 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.609 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.636 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.28 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.001 sec <<< FAILURE! - in org.apache.hadoop.tracing.TestTraceAdmin
testCreateAndDestroySpanReceiver(org.apache.hadoop.tracing.TestTraceAdmin)  Time elapsed: 3.919 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)

Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.512 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.964 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.088 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.808 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.641 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.836 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.25 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.889 sec - in org.apache.hadoop.cli.TestXAttrCLI

Results :

Failed tests: 
  TestHdfsConfigFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:468 hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Tests in error: 
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3385, Failures: 1, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.224 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-11T14:19:48+00:00
[INFO] Final Memory: 59M/705M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3162287917913898657.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5640927633655737717tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4629163836801978104477tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362828 bytes
Compression is 0.0%
Took 11 sec
Recording test results
Updating HDFS-8351

Build failed in Jenkins: Hadoop-Hdfs-trunk #2121

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2121/changes>

Changes:

[junping_du] MAPREDUCE-6359. In RM HA setup, Cluster tab links populated with AM hostname instead of RM. Contributed by zhaoyunjiong.

[kasha] YARN-1287. Consolidate MockClocks. (Sebastian Wong and Anubhav Dhoot via kasha)

[kasha] MAPREDUCE-6353. Divide by zero error in MR AM when calculating available containers. (Anubhav Dhoot via kasha)

[kasha] YARN-3395. FairScheduler: Trim whitespaces when using username for queuename. (Zhihai Xu via kasha)

[wheat9] HDFS-8357. Consolidate parameters of INode.CleanSubtree() into a parameter objects. Contributed by Li Lu.

------------------------------------------
[...truncated 6690 lines...]
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 381.071 sec - in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.538 sec - in org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.961 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.316 sec - in org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.222 sec - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.948 sec - in org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.663 sec - in org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.209 sec - in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.131 sec - in org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.431 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.809 sec - in org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.004 sec <<< FAILURE! - in org.apache.hadoop.tracing.TestTraceAdmin
testCreateAndDestroySpanReceiver(org.apache.hadoop.tracing.TestTraceAdmin)  Time elapsed: 3.923 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)

Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.23 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.589 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.896 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.093 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.434 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.659 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.118 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.365 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.86 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.898 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.626 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.961 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.089 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.703 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.81 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.141 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.979 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.854 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.109 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.958 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.54 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.828 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.77 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.852 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.915 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.34 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.612 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.68 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.615 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.673 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.198 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.392 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.605 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.611 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.488 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.488 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.446 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.352 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.245 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.21 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.695 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.795 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.31 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.417 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.573 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.566 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.362 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.477 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.417 sec - in org.apache.hadoop.fs.TestUnbuffer

Results :

Failed tests: 
  TestHdfsConfigFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:468 hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Tests in error: 
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3397, Failures: 1, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.887 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-10T14:20:23+00:00
[INFO] Final Memory: 52M/689M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362810 bytes
Compression is 0.0%
Took 6.4 sec
Recording test results
Updating MAPREDUCE-6359
Updating HDFS-8357
Updating YARN-1287
Updating MAPREDUCE-6353
Updating YARN-3395

Hadoop-Hdfs-trunk - Build # 2121 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2121/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6883 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.887 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:45 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.056 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-05-10T14:20:23+00:00
[INFO] Final Memory: 52M/689M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362810 bytes
Compression is 0.0%
Took 6.4 sec
Recording test results
Updating MAPREDUCE-6359
Updating HDFS-8357
Updating YARN-1287
Updating MAPREDUCE-6353
Updating YARN-3395
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.tools.TestHdfsConfigFields.testCompareXmlAgainstConfigurationClass

Error Message:
hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Stack Trace:
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)



Build failed in Jenkins: Hadoop-Hdfs-trunk #2120

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/changes>

Changes:

[jlowe] YARN-3554. Default value for maximum nodemanager connect wait time is too high. Contributed by Naganarasimha G R

[devaraj] YARN-2784. Make POM project names consistent. Contributed by Rohith.

[kihwal] HDFS-7894. Rolling upgrade readiness is not updated in jmx until query command is issued. Contributed by Brahma Reddy Battula.

[devaraj] MAPREDUCE-5981. Log levels of certain MR logs can be changed to DEBUG.

[evans] YARN-644: Basic null check is not performed on passed in arguments before using them in ContainerManagerImpl.startContainer

[tgraves] YARN-3600. AM container link is broken (Naganarasimha G R via tgraves

[evans] HADOOP-6842. "hadoop fs -text" does not give a useful text representation of MapWritable objects

[wheat9] HDFS-8346. libwebhdfs build fails during link due to unresolved external symbols. Contributed by Chris Nauroth.

[arp] HDFS-8274. NFS configuration nfs.dump.dir not working (Contributed by Ajith S)

[arp] HDFS-8340. Fix NFS documentation of nfs.wtmax. (Contributed by Ajith S)

[tgraves] YARN-20. More information for yarn.resourcemanager.webapp.address in yarn-default.xml (Bartosz Ługowski via tgraves)

[Arun Suresh] HDFS-7559. Create unit test to automatically compare HDFS related classes and hdfs-default.xml. (Ray Chiang via asuresh)

[cnauroth] HDFS-5640. Add snapshot methods to FileContext. Contributed by Rakesh R.

[arp] HADOOP-10356. Corrections in winutils/chmod.c (Contributed by René Nyffenegger)

[kasha] MAPREDUCE-2632. Avoid calling the partitioner when the numReduceTasks is 1. (Ravi Teja Ch N V and Sunil G via kasha)

[wangda] YARN-3593. Add label-type and Improve "DEFAULT_PARTITION" in Node Labels Page. (Naganarasimha G R via wangda)

[yzhang] HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. (Esteban Gutierrez via Yongjun Zhang)

[cmccabe] HDFS-8284. Update documentation about how to use HTrace with HDFS (Masatake Iwasaki via Colin P. McCabe)

[kihwal] HDFS-7433. Optimize performance of DatanodeManager's node map. Contributed by Daryn Sharp.

[jlowe] HADOOP-7165. listLocatedStatus(path, filter) is not redefined in FilterFs. Contributed by Hairong Kuang

[jlowe] MAPREDUCE-3383. Duplicate job.getOutputValueGroupingComparator() in ReduceTask. Contributed by Binglin Chang

[cmccabe] HDFS-8113. Add check for null BlockCollection pointers in BlockInfoContiguous structures (Chengbing Liu via Colin P. McCabe)

[jlowe] HADOOP-9729. The example code of org.apache.hadoop.util.Tool is incorrect. Contributed by hellojinjie

[cdouglas] MAPREDUCE-2094. LineRecordReader should not seek into non-splittable, compressed streams.

[xyao] HADOOP-11942. Add links to SLGUserGuide to site index (Masatake Iwasaki via xyao)

[kihwal] HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock request is in edit log. Contributed by Rushabh S Shah.

[vinodkv] YARN-3018. Unified the default value for the configuration property yarn.scheduler.capacity.node-locality-delay in code and default xml file. Contributed by Nijel SF.

[xyao] HDFS-8326. Documentation about when checkpoints are run is out of date. (Misty Stanley-Jones via xyao)

[xgong] YARN-2331. Distinguish shutdown during supervision vs. shutdown for

[jianhe] YARN-3604. Fixed ZKRMStateStore#removeApplication to also disable watch. Contributed Zhihai Xu

[aw] HADOOP-11906. test-patch.sh should use file command for patch determinism (Sean Busbey via aw)

[aw] HADOOP-11590. Update sbin commands and documentation to use new --slaves option (aw)

[jlowe] MAPREDUCE-5248. Let NNBenchWithoutMR specify the replication factor for its test. Contributed by Erik Paulson

[jlowe] YARN-3476. Nodemanager can fail to delete local logs if log aggregation fails. Contributed by Rohith

[raviprak] MAPREDUCE-4750. Enable NNBenchWithoutMR in MapredTestDriver (Liang Xie and Jason Lowe via raviprak)

[rkanter] HADOOP-9737. JarFinder#getJar should delete the jar file upon destruction of the JVM (jbonofre via rkanter)

[rkanter] YARN-3473. Fix RM Web UI configuration for some properties (rchiang via rkanter)

[arp] HDFS-8097. TestFileTruncate is failing intermittently. (Contributed by Rakesh R)

[kasha] YARN-1050. Document the Fair Scheduler REST API. (Kenji Kikushima and Roman Shaposhnik via kasha)

[kasha] YARN-3271. FairScheduler: Move tests related to max-runnable-apps from TestFairScheduler to TestAppRunnability. (nijel via kasha)

[zjshen] YARN-2206. Updated document for applications REST API response examples. Contributed by Kenji Kikushima and Brahma Reddy Battula.

[aw] HADOOP-11775. Fix Javadoc typos in hadoop-openstack module (Yanjun Wang via aw)

[xgong] YARN-3602. TestResourceLocalizationService.testPublicResourceInitializesLocalDir fails intermittently due to IOException from cleanup. Contributed by zhihai xu

[xgong] YARN-1912. ResourceLocalizer started without any jvm memory control.

[wheat9] HDFS-6757. Simplify lease manager with INodeID. Contributed by Haohui Mai.

[wheat9] Add missing entry in CHANGES.txt for HDFS-6757.

[wheat9] HDFS-8327. Compute storage type quotas in INodeFile.computeQuotaDeltaForTruncate(). Contributed by Haohui Mai.

------------------------------------------
[...truncated 6722 lines...]
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.79 sec - in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.683 sec - in org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.89 sec - in org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.414 sec - in org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.897 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.979 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.754 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.465 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.663 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.846 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.575 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.229 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.789 sec - in org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.593 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.419 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.387 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.364 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.356 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.469 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.107 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.504 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.689 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.466 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.739 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.015 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.572 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.587 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.999 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.287 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.506 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.699 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.089 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.962 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.457 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.908 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.423 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.575 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.743 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.974 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.901 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.798 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.363 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.987 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.196 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.822 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.101 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.672 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.414 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.94 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.886 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.466 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.888 sec - in org.apache.hadoop.TestGenericRefresh

Results :

Failed tests: 
  TestHdfsConfigFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass:468 hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Tests in error: 
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3397, Failures: 1, Errors: 1, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.158 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:45 h
[INFO] Finished at: 2015-05-09T14:20:58+00:00
[INFO] Final Memory: 52M/695M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362646 bytes
Compression is 0.0%
Took 14 sec
Recording test results
Updating HDFS-5640
Updating YARN-3593
Updating YARN-3271
Updating HDFS-8097
Updating HDFS-6757
Updating YARN-3476
Updating HADOOP-11590
Updating YARN-2331
Updating HDFS-8245
Updating YARN-3554
Updating HDFS-7433
Updating HADOOP-10356
Updating HDFS-8284
Updating YARN-3473
Updating HDFS-8340
Updating HADOOP-7165
Updating MAPREDUCE-4750
Updating YARN-644
Updating YARN-1912
Updating MAPREDUCE-5981
Updating MAPREDUCE-2094
Updating YARN-3018
Updating HADOOP-9729
Updating HADOOP-6842
Updating MAPREDUCE-2632
Updating YARN-2784
Updating YARN-3600
Updating HDFS-7894
Updating YARN-3602
Updating HADOOP-11906
Updating HDFS-8274
Updating YARN-1050
Updating HDFS-8346
Updating HDFS-8327
Updating HDFS-8311
Updating YARN-20
Updating HDFS-8326
Updating HDFS-7559
Updating YARN-3604
Updating HADOOP-11942
Updating MAPREDUCE-3383
Updating YARN-2206
Updating MAPREDUCE-5248
Updating HADOOP-11775
Updating HADOOP-9737
Updating HDFS-8113

Hadoop-Hdfs-trunk - Build # 2120 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6915 lines...]
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362646 bytes
Compression is 0.0%
Took 14 sec
Recording test results
Updating HDFS-5640
Updating YARN-3593
Updating YARN-3271
Updating HDFS-8097
Updating HDFS-6757
Updating YARN-3476
Updating HADOOP-11590
Updating YARN-2331
Updating HDFS-8245
Updating YARN-3554
Updating HDFS-7433
Updating HADOOP-10356
Updating HDFS-8284
Updating YARN-3473
Updating HDFS-8340
Updating HADOOP-7165
Updating MAPREDUCE-4750
Updating YARN-644
Updating YARN-1912
Updating MAPREDUCE-5981
Updating MAPREDUCE-2094
Updating YARN-3018
Updating HADOOP-9729
Updating HADOOP-6842
Updating MAPREDUCE-2632
Updating YARN-2784
Updating YARN-3600
Updating HDFS-7894
Updating YARN-3602
Updating HADOOP-11906
Updating HDFS-8274
Updating YARN-1050
Updating HDFS-8346
Updating HDFS-8327
Updating HDFS-8311
Updating YARN-20
Updating HDFS-8326
Updating HDFS-7559
Updating YARN-3604
Updating HADOOP-11942
Updating MAPREDUCE-3383
Updating YARN-2206
Updating MAPREDUCE-5248
Updating HADOOP-11775
Updating HADOOP-9737
Updating HDFS-8113
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.tools.TestHdfsConfigFields.testCompareXmlAgainstConfigurationClass

Error Message:
hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys

Stack Trace:
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class org.apache.hadoop.hdfs.DFSConfigKeys
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
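
For reference, the assertion above comes from a check that walks every property declared in hdfs-default.xml and looks for a matching constant in org.apache.hadoop.hdfs.DFSConfigKeys. The sketch below only illustrates that kind of comparison; it is a simplified, hypothetical reconstruction (the class name ConfigFieldsCheckSketch and the exact matching rule are assumptions), not the actual test source.

    import java.lang.reflect.Field;
    import java.lang.reflect.Modifier;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;

    // Hypothetical sketch of the xml-vs-constants comparison; not the real test.
    public class ConfigFieldsCheckSketch {
      public static void main(String[] args) throws Exception {
        // Collect every static String constant declared on DFSConfigKeys.
        Set<String> declaredKeys = new HashSet<String>();
        for (Field f : DFSConfigKeys.class.getDeclaredFields()) {
          if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class) {
            declaredKeys.add((String) f.get(null));
          }
        }
        // Load hdfs-default.xml from the classpath and report properties
        // that have no corresponding constant.
        Configuration conf = new Configuration(false);
        conf.addResource("hdfs-default.xml");
        for (Map.Entry<String, String> prop : conf) {
          if (!declaredKeys.contains(prop.getKey())) {
            System.out.println("missing in DFSConfigKeys: " + prop.getKey());
          }
        }
      }
    }

A build trips this check when a property is added to hdfs-default.xml without a matching key constant, which is what the "2 properties missing" message above indicates.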


FAILED:  org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)
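
The TestTraceAdmin error is raised on the NameNode side when SpanReceiverHost cannot instantiate org.apache.htrace.impl.LocalFileSpanReceiver in response to an add request from the TraceAdmin tool. A rough way to drive the same code path outside the test is sketched below; the -add/-class/-host flags and the localhost:8020 address are illustrative assumptions based on the 'hadoop trace' command, not taken from the test itself.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.tracing.TraceAdmin;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical driver mirroring what TestTraceAdmin.runTraceCommand does:
    // run the TraceAdmin tool against a NameNode and ask it to add a span receiver.
    public class TraceAdminRepro {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The flags below are assumptions; consult the HDFS tracing docs.
        int rc = ToolRunner.run(conf, new TraceAdmin(), new String[] {
            "-add",
            "-class", "org.apache.htrace.impl.LocalFileSpanReceiver",
            "-host", "localhost:8020" });
        System.exit(rc);
      }
    }

If the same RemoteException comes back, the NameNode's classpath and the span receiver's configuration (for example, a writable output path for LocalFileSpanReceiver) are the places to look, since the failure happens inside SpanReceiverHost.loadInstance on the server.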



Build failed in Jenkins: Hadoop-Hdfs-trunk #2119

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2119/changes>

Changes:

[junping_du] YARN-3523. Cleanup ResourceManagerAdministrationProtocol interface audience. Contributed by Naganarasimha G R

[zjshen] YARN-3448. Added a rolling time-to-live LevelDB timeline store implementation. Contributed by Jonathan Eagles.

[szetszwo] HDFS-7980. Incremental BlockReport will dramatically slow down namenode startup.  Contributed by Walter Su

[aw] HADOOP-11936. Dockerfile references a removed image (aw)

[jianhe] YARN-3584. Fixed attempt diagnostics format shown on the UI. Contributed by nijel

[jlowe] MAPREDUCE-6279. AM should explicity exit JVM after all services have stopped. Contributed by Eric Payne

[wheat9] HDFS-8321. CacheDirectives and CachePool operations should throw RetriableException in safemode. Contributed by Haohui Mai.

[wheat9] HDFS-8037. CheckAccess in WebHDFS silently accepts malformed FsActions parameters. Contributed by Walter Su.

[jianhe] YARN-2918. RM should not fail on startup if queue's configured labels do not exist in cluster-node-labels. Contributed by Wangda Tan

[aajisaka] YARN-1832. Fix wrong MockLocalizerStatus#equals implementation. Contributed by Hong Zhiguo.

[aajisaka] YARN-3572. Correct typos in WritingYarnApplications.md. Contributed by Gabor Liptak.

[vinayakumarb] HADOOP-11922. Misspelling of threshold in log4j.properties for tests in hadoop-tools (Contributed by Gabor Liptak)

[vinayakumarb] HDFS-8257. Namenode rollingUpgrade option is incorrect in document (Contributed by J.Andreina)

[vinayakumarb] HDFS-8067. haadmin prints out stale help messages (Contributed by Ajith S)

[devaraj] YARN-3592. Fix typos in RMNodeLabelsManager. Contributed by Sunil G.

[umamahesh] HDFS-8174. Update replication count to live rep count in fsck report. Contributed by  J.Andreina

[vinayakumarb] HDFS-6291. FSImage may be left unclosed in BootstrapStandby#doRun() ( Contributed by Sanghyun Yun)

[devaraj] YARN-3358. Audit log not present while refreshing Service ACLs.

[umamahesh] HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R.

[aajisaka] HDFS-8349. Remove .xml and documentation references to dfs.webhdfs.enabled. Contributed by Ray Chiang.

[ozawa] MAPREDUCE-6284. Add Task Attempt State API to MapReduce Application Master REST API. Contributed by Ryu Kobayashi.

[vinayakumarb] HDFS-7998. HDFS Federation : Command mentioned to add a NN to existing federated cluster is wrong (Contributed by Ajith S)

[aajisaka] HDFS-8222. Remove usage of "dfsadmin -upgradeProgress" from document which is no longer supported. Contributed by J.Andreina.

[ozawa] YARN-3589. RM and AH web UI display DOCTYPE wrongly. Contributed by Rohith.

[umamahesh] HDFS-8108. Fsck should provide the info on mandatory option to be used along with -blocks ,-locations and -racks. Contributed by J.Andreina.

[vinayakumarb] HDFS-8187. Remove usage of '-setStoragePolicy' and '-getStoragePolicy' using dfsadmin cmd (as it is not been supported) (Contributed by J.Andreina)

[vinayakumarb] HDFS-8175. Provide information on snapshotDiff for supporting the comparison between snapshot and current status (Contributed by J.Andreina)

[ozawa] HDFS-8207. Improper log message when blockreport interval compared with initial delay. Contributed by Brahma Reddy Battula and Ashish Singhi.

[aajisaka] MAPREDUCE-6079. Rename JobImpl#username to reporterUserName. Contributed by Tsuyoshi Ozawa.

[vinayakumarb] HDFS-8209. Support different number of datanode directories in MiniDFSCluster. (Contributed by surendra singh lilhore)

[devaraj] MAPREDUCE-6342. Make POM project names consistent. Contributed by Rohith.

[vinayakumarb] HDFS-8226. Non-HA rollback compatibility broken (Contributed by J.Andreina)

[ozawa] YARN-3169. Drop YARN's overview document. Contributed by Brahma Reddy Battula.

[vinayakumarb] HDFS-6576. Datanode log is generating at root directory in security mode (Contributed by surendra singh lilhore)

[vinayakumarb] HADOOP-11877. SnappyDecompressor's Logger class name is wrong ( Contributed by surendra singh lilhore)

[vinayakumarb] HDFS-3384. DataStreamer thread should be closed immediately when failed to setup a PipelineForAppendOrRecovery (Contributed by Uma Maheswara Rao G)

[umamahesh] HDFS-6285. tidy an error log inside BlockReceiver. Contributed by Liang Xie.

------------------------------------------
[...truncated 6730 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.542 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.565 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.393 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.669 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.075 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.592 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.898 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.476 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.643 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.141 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.376 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.439 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.073 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.479 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.969 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.568 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.661 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.716 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.799 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.837 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.865 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.157 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.133 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.976 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.358 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.972 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.862 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.289 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.968 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.451 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.792 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.019 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.447 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.686 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.293 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.044 sec <<< FAILURE! - in org.apache.hadoop.tracing.TestTraceAdmin
testCreateAndDestroySpanReceiver(org.apache.hadoop.tracing.TestTraceAdmin)  Time elapsed: 3.961 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)

Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.679 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.976 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.064 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.74 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.612 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.941 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.219 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.968 sec - in org.apache.hadoop.cli.TestXAttrCLI

Results :

Tests in error: 
  TestSnapshot.testSnapshot:237->runTestSnapshot:298->checkFSImage:201 » IO Time...
  TestSnapshot.setUp:120 » Runtime org.xml.sax.SAXParseException; systemId: jar:...
  TestSnapshot.setUp:120 » Runtime org.xml.sax.SAXParseException; systemId: jar:...
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3390, Failures: 0, Errors: 4, Skipped: 17

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 47.169 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:47 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.063 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:48 h
[INFO] Finished at: 2015-05-08T15:16:54+00:00
[INFO] Final Memory: 56M/686M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362738 bytes
Compression is 0.0%
Took 10 sec
Recording test results
Updating HDFS-8222
Updating YARN-3592
Updating HDFS-8037
Updating HADOOP-11936
Updating MAPREDUCE-6279
Updating YARN-3572
Updating HDFS-6576
Updating HDFS-8174
Updating HADOOP-11877
Updating HDFS-8175
Updating HDFS-8257
Updating HDFS-8321
Updating MAPREDUCE-6342
Updating YARN-2918
Updating YARN-1832
Updating HDFS-6285
Updating HDFS-3384
Updating HDFS-7998
Updating YARN-3584
Updating HDFS-8108
Updating YARN-3448
Updating YARN-3523
Updating HADOOP-11922
Updating HDFS-8332
Updating YARN-3589
Updating HDFS-8067
Updating MAPREDUCE-6284
Updating HDFS-8187
Updating HDFS-8349
Updating HDFS-8207
Updating YARN-3358
Updating HDFS-8209
Updating HDFS-8226
Updating HDFS-7980
Updating MAPREDUCE-6079
Updating YARN-3169
Updating HDFS-6291

Hadoop-Hdfs-trunk - Build # 2119 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2119/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6923 lines...]
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362738 bytes
Compression is 0.0%
Took 10 sec
Recording test results
Updating HDFS-8222
Updating YARN-3592
Updating HDFS-8037
Updating HADOOP-11936
Updating MAPREDUCE-6279
Updating YARN-3572
Updating HDFS-6576
Updating HDFS-8174
Updating HADOOP-11877
Updating HDFS-8175
Updating HDFS-8257
Updating HDFS-8321
Updating MAPREDUCE-6342
Updating YARN-2918
Updating YARN-1832
Updating HDFS-6285
Updating HDFS-3384
Updating HDFS-7998
Updating YARN-3584
Updating HDFS-8108
Updating YARN-3448
Updating YARN-3523
Updating HADOOP-11922
Updating HDFS-8332
Updating YARN-3589
Updating HDFS-8067
Updating MAPREDUCE-6284
Updating HDFS-8187
Updating HDFS-8349
Updating HDFS-8207
Updating YARN-3358
Updating HDFS-8209
Updating HDFS-8226
Updating HDFS-7980
Updating MAPREDUCE-6079
Updating YARN-3169
Updating HDFS-6291
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.testSnapshot

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1206)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:471)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:430)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.checkFSImage(TestSnapshot.java:201)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.runTestSnapshot(TestSnapshot.java:298)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.testSnapshot(TestSnapshot.java:237)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.testSnapshottableDirectory

Error Message:
org.xml.sax.SAXParseException; systemId: jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/3.0.0-SNAPSHOT/hadoop-common-3.0.0-SNAPSHOT.jar!/core-default.xml; lineNumber: 1; columnNumber: 1; Invalid byte 1 of 1-byte UTF-8 sequence.

Stack Trace:
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/3.0.0-SNAPSHOT/hadoop-common-3.0.0-SNAPSHOT.jar!/core-default.xml; lineNumber: 1; columnNumber: 1; Invalid byte 1 of 1-byte UTF-8 sequence.
	at org.apache.xerces.impl.io.UTF8Reader.invalidByte(Unknown Source)
	at org.apache.xerces.impl.io.UTF8Reader.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
	at org.apache.xerces.impl.XMLEntityScanner.skipString(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.setUp(TestSnapshot.java:120)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.testAllowAndDisallowSnapshot

Error Message:
org.xml.sax.SAXParseException; systemId: jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/3.0.0-SNAPSHOT/hadoop-common-3.0.0-SNAPSHOT.jar!/core-default.xml; lineNumber: 1; columnNumber: 1; Invalid byte 1 of 1-byte UTF-8 sequence.

Stack Trace:
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/3.0.0-SNAPSHOT/hadoop-common-3.0.0-SNAPSHOT.jar!/core-default.xml; lineNumber: 1; columnNumber: 1; Invalid byte 1 of 1-byte UTF-8 sequence.
	at org.apache.xerces.impl.io.UTF8Reader.invalidByte(Unknown Source)
	at org.apache.xerces.impl.io.UTF8Reader.read(Unknown Source)
	at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
	at org.apache.xerces.impl.XMLEntityScanner.skipString(Unknown Source)
	at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
	at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
	at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
	at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
	at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
	at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.setUp(TestSnapshot.java:120)
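
Both TestSnapshot setUp regressions fail at the same point: Configuration cannot parse the core-default.xml packaged inside the hadoop-common-3.0.0-SNAPSHOT jar in the build's local Maven repository, which usually points at a corrupt cached snapshot jar rather than a test bug. A small stand-alone check of that assumption, parsing the same resource the way Configuration does, might look like this (the class name CoreDefaultXmlCheck is illustrative):

    import java.io.InputStream;
    import javax.xml.parsers.DocumentBuilderFactory;

    // Hypothetical diagnostic: parse core-default.xml from the classpath to
    // confirm whether the copy bundled in the hadoop-common jar is valid XML.
    public class CoreDefaultXmlCheck {
      public static void main(String[] args) throws Exception {
        InputStream in = CoreDefaultXmlCheck.class.getClassLoader()
            .getResourceAsStream("core-default.xml");
        if (in == null) {
          System.out.println("core-default.xml not found on the classpath");
          return;
        }
        try {
          DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(in);
          System.out.println("core-default.xml parsed cleanly");
        } finally {
          in.close();
        }
      }
    }

If the parse fails with the same "Invalid byte 1 of 1-byte UTF-8 sequence" error, deleting the cached hadoop-common snapshot from the workspace's .m2 repository and letting the build re-resolve it is the likely fix.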


FAILED:  org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)



Build failed in Jenkins: Hadoop-Hdfs-trunk #2118

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/2118/changes>

Changes:

[xyao] HDFS-8310. Fix TestCLI.testAll 'help: help for find' on Windows. (Kiran Kumar M R via Xiaoyu Yao)

[aw] HADOOP-11813. releasedocmaker.py should use today's date instead of unreleased (Darrell Taylor via aw)

[vinodkv] YARN-3243. Moving CHANGES.txt entry to the right release.

[jianhe] YARN-3301. Fixed the format issue of the new RM attempt web page. Contributed by Xuan Gong

[rkanter] YARN-3491. PublicLocalizer#addResource is too slow. (zxu via rkanter)

[shv] HDFS-2484. checkLease should throw FileNotFoundException when file does not exist. Contributed by Rakesh R.

[junping_du] YARN-3580. [JDK8] TestClientRMService.testGetLabelsToNodes fails. Contributed by Robert Kanter.

[vinodkv] YARN-3385. Fixed a race-condition in ResourceManager's ZooKeeper based state-store to avoid crashing on duplicate deletes. Contributed by Zhihai Xu.

[cnauroth] HDFS-7833. DataNode reconfiguration does not recalculate valid volumes required, based on configured failed volumes tolerated. Contributed by Lei (Eddy) Xu.

[aajisaka] YARN-3577. Misspelling of threshold in log4j.properties for tests. Contributed by Brahma Reddy Battula.

[aajisaka] HDFS-8325. Misspelling of threshold in log4j.properties for tests. Contributed by Brahma Reddy Battula.

[aajisaka] HADOOP-10387. Misspelling of threshold in log4j.properties for tests in hadoop-common-project. Contributed by Brahma Reddy Battula.

[aajisaka] MAPREDUCE-6356. Misspelling of threshold in log4j.properties for tests. Contributed by Brahma Reddy Battula.

------------------------------------------
[...truncated 6646 lines...]
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.343 sec - in org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.914 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.475 sec - in org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.294 sec - in org.apache.hadoop.hdfs.TestPersistBlocks
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.198 sec - in org.apache.hadoop.hdfs.TestFSInputChecker
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.749 sec - in org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.547 sec - in org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 35, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.268 sec - in org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.678 sec - in org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.032 sec - in org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 72, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.243 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.599 sec - in org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.1 sec - in org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.09 sec - in org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.532 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.262 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.697 sec - in org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.834 sec - in org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.688 sec - in org.apache.hadoop.fs.shell.TestHdfsTextCommand
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.895 sec - in org.apache.hadoop.fs.TestResolveHdfsSymlink
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in org.apache.hadoop.fs.TestVolumeId
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.534 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.704 sec - in org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.232 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.541 sec - in org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.951 sec - in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.848 sec - in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.76 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.338 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.439 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.788 sec - in org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.941 sec - in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.885 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.793 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.897 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.181 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.188 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.321 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.991 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.755 sec - in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.39 sec - in org.apache.hadoop.fs.permission.TestStickyBit
Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.09 sec - in org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.457 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.721 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.989 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.465 sec - in org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.844 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.173 sec - in org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.163 sec <<< FAILURE! - in org.apache.hadoop.tracing.TestTraceAdmin
testCreateAndDestroySpanReceiver(org.apache.hadoop.tracing.TestTraceAdmin)  Time elapsed: 4.081 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
	at org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
	at org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
	at org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)

	at org.apache.hadoop.ipc.Client.call(Client.java:1492)
	at org.apache.hadoop.ipc.Client.call(Client.java:1423)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy21.addSpanReceiver(Unknown Source)
	at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.addSpanReceiver(TraceAdminProtocolTranslatorPB.java:81)
	at org.apache.hadoop.tracing.TraceAdmin.addSpanReceiver(TraceAdmin.java:120)
	at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:182)
	at org.apache.hadoop.tracing.TestTraceAdmin.runTraceCommand(TestTraceAdmin.java:44)
	at org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver(TestTraceAdmin.java:74)

Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.594 sec - in org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.986 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.032 sec - in org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.305 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.682 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.607 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.24 sec - in org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.702 sec - in org.apache.hadoop.cli.TestXAttrCLI

Results :

Tests in error: 
  TestTraceAdmin.testCreateAndDestroySpanReceiver:74->runTraceCommand:44 » Remote

Tests run: 3385, Failures: 0, Errors: 1, Skipped: 17
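The single error above is the TestTraceAdmin failure shown earlier. It can be re-run in isolation from a trunk checkout using Surefire's test filter; the module path below matches the layout this job uses, so adjust it if your checkout differs (add -am if dependent modules have not yet been built):

  # run only the failing test class in the hadoop-hdfs module
  mvn test -Dtest=TestTraceAdmin -pl hadoop-hdfs-project/hadoop-hdfs

  # or narrow it to the single failing method
  mvn test -Dtest=TestTraceAdmin#testCreateAndDestroySpanReceiver -pl hadoop-hdfs-project/hadoop-hdfs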

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.601 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.055 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-07T14:26:52+00:00
[INFO] Final Memory: 54M/681M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
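Here <goals> stands for whatever goals the original invocation used; they are not echoed in this excerpt. A resume of the reactor from the failed module would look roughly like the following, with illustrative goals rather than the job's actual ones:

  # resume the multi-module build at hadoop-hdfs, printing full stack traces on failure
  mvn -e clean test -rf :hadoop-hdfs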
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362786 bytes
Compression is 0.0%
Took 6.2 sec
Recording test results
Updating YARN-3243
Updating YARN-3580
Updating YARN-3577
Updating HADOOP-11813
Updating YARN-3385
Updating MAPREDUCE-6356
Updating YARN-3491
Updating HDFS-8310
Updating YARN-3301
Updating HDFS-8325
Updating HDFS-7833
Updating HADOOP-10387
Updating HDFS-2484