Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2013/01/02 12:50:36 UTC

Build failed in Jenkins: Hadoop-Hdfs-trunk #1273

See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1273/>

------------------------------------------
[...truncated 11364 lines...]
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/hs_err_pid11053.log>
#
# [... the same error repeats for 40 more forked JVMs, with crash reports
#  hs_err_pid11063.log through hs_err_pid11461.log at the same workspace path ...]

Results :

Tests in error: 
  testURIPaths(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testText(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testCopyToLocal(org.apache.hadoop.hdfs.TestDFSShell): Timed out waiting for Mini HDFS Cluster to start
  testCount(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testDFSShell(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testRemoteException(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testGet(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testLsr(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testCopyCommandsWithForceOption(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testServerConfigRespected(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testServerConfigRespectedWithClient(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testClientConfigRespected(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread
  testNoTrashConfig(org.apache.hadoop.hdfs.TestDFSShell): unable to create new native thread

Tests run: 31, Failures: 0, Errors: 14, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [16:36.603s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16:37.412s
[INFO] Finished at: Wed Jan 02 11:50:29 UTC 2013
[INFO] Final Memory: 34M/692M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
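
Every failure above, from the aborted GC threads to the "unable to create new native thread" test errors, points at the same root cause: the build slave ran out of native threads, not heap (note "Final Memory: 34M/692M"). The usual culprit is the per-user process/thread limit (ulimit -u) on the Jenkins host. A minimal standalone sketch that reproduces the symptom outside the build (the class name is made up, not part of the Hadoop tree):

    // ThreadExhaustion.java -- hypothetical reproducer, not Hadoop code.
    // Spawns parked daemon threads until the OS refuses to create another one,
    // producing the same "unable to create new native thread" OutOfMemoryError
    // seen in the test errors above. The heap stays nearly empty throughout;
    // the limiting resource is the native thread/process cap, so raising -Xmx
    // would not help.
    import java.util.concurrent.CountDownLatch;

    public class ThreadExhaustion {
        public static void main(String[] args) {
            CountDownLatch forever = new CountDownLatch(1);
            long started = 0;
            try {
                while (true) {
                    Thread t = new Thread(() -> {
                        try { forever.await(); } catch (InterruptedException ignored) { }
                    });
                    t.setDaemon(true);   // let the JVM exit once we hit the wall
                    t.start();
                    started++;
                }
            } catch (OutOfMemoryError e) {
                System.err.println("thread #" + (started + 1) + " failed: " + e.getMessage());
            }
        }
    }

Running it under a deliberately low limit (e.g. "ulimit -u 512" in the launching shell) makes it fail within a second or two, which is a quick way to confirm the diagnosis on a suspect slave.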

Hadoop-Hdfs-trunk - Build # 1274 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1274/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7345 lines...]
[ERROR] location: package org.apache.hadoop.util
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/GenerationStamp.java:[27,37] cannot find symbol
[ERROR] symbol: class SequentialNumber
[ERROR] public class GenerationStamp extends SequentialNumber {
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java:[21,29] cannot find symbol
[ERROR] symbol  : class SequentialNumber
[ERROR] location: package org.apache.hadoop.util
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java:[27,22] cannot find symbol
[ERROR] symbol: class SequentialNumber
[ERROR] class INodeId extends SequentialNumber {
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[32,48] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[33,48] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[385,13] cannot find symbol
[ERROR] symbol  : method skipTo(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[393,11] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[398,18] cannot find symbol
[ERROR] symbol  : method getCurrentValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[403,18] cannot find symbol
[ERROR] symbol  : method nextValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[412,19] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[414,11] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[4768,19] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[4775,26] cannot find symbol
[ERROR] symbol  : method getCurrentValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:[4787,35] cannot find symbol
[ERROR] symbol  : method nextValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[55,4] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[55,33] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[59,4] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[59,35] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4346
Updating HDFS-4338
Updating YARN-293
Updating MAPREDUCE-4884
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
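
Unlike #1273, this build never reached the tests: every compile error above is the same missing class, org.apache.hadoop.util.SequentialNumber, which GenerationStamp and INodeId extend. Given "Updating HDFS-4346" in the changelog, the likely explanation is that hadoop-hdfs compiled against a stale hadoop-common snapshot that predates the class. The API the compiler is looking for can be read straight off the error list; here is a reconstruction of it (inferred from the log, not copied from Hadoop's source -- the AtomicLong representation and the skipTo contract are assumptions):

    // Sketch of the missing class, reconstructed from the "cannot find symbol"
    // errors above: getCurrentValue(), setCurrentValue(long), nextValue(),
    // skipTo(long), with GenerationStamp and INodeId as subclasses.
    package org.apache.hadoop.util;

    import java.util.concurrent.atomic.AtomicLong;

    public abstract class SequentialNumber {
        private final AtomicLong currentValue;

        protected SequentialNumber(final long initialValue) {
            currentValue = new AtomicLong(initialValue);
        }

        public long getCurrentValue() {
            return currentValue.get();
        }

        public void setCurrentValue(long value) {
            currentValue.set(value);
        }

        /** Increment and return the new value. */
        public long nextValue() {
            return currentValue.incrementAndGet();
        }

        /** Advance to newValue; refuse to move backwards. */
        public void skipTo(long newValue) throws IllegalStateException {
            for (;;) {
                final long c = currentValue.get();
                if (newValue < c) {
                    throw new IllegalStateException(
                        "Cannot skip to a value smaller than the current value " + c);
                }
                if (currentValue.compareAndSet(c, newValue)) {
                    return;
                }
            }
        }
    }

A clean rebuild that refreshes the hadoop-common snapshot in the local repository would clear all of the symbol errors at once; the "Sun proprietary API" lines about OutputFormat/XMLSerializer are javac warnings echoed by Maven and appear unrelated.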

Hadoop-Hdfs-trunk - Build # 1276 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1276/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10917 lines...]
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


Results :

Tests in error: 
  testPipelineRecoveryStress(org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover): test timed out after 120000 milliseconds
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:33616 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 

Tests run: 1021, Failures: 0, Errors: 18, Skipped: 5

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:38:28.372s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:38:29.134s
[INFO] Finished at: Sat Jan 05 13:11:49 UTC 2013
[INFO] Final Memory: 37M/325M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4819
Updating MAPREDUCE-4832
Updating HADOOP-9173
Updating YARN-50
Updating MAPREDUCE-4894
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
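
The "code=500 != 201, op=CREATE" assertions above come from the WebHDFS file-create handshake, a two-step REST exchange: a PUT to the NameNode that answers 307 Temporary Redirect with a DataNode Location, then a PUT of the data to that DataNode, which should answer 201 Created. A hedged sketch of the exchange the contract test exercises (the host, port, path, and user.name below are placeholders, not taken from this build):

    // WebHdfsCreate.java -- illustrative client for the WebHDFS CREATE
    // two-step; endpoint details are assumptions.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebHdfsCreate {
        public static void main(String[] args) throws Exception {
            // Step 1: PUT to the NameNode; expect 307 plus a DataNode Location.
            URL nn = new URL("http://localhost:50070/webhdfs/v1/test/hadoop/file"
                + "?op=CREATE&user.name=jenkins&overwrite=true");
            HttpURLConnection c1 = (HttpURLConnection) nn.openConnection();
            c1.setRequestMethod("PUT");
            c1.setInstanceFollowRedirects(false);   // inspect the 307 ourselves
            int rc1 = c1.getResponseCode();
            String dataNodeUrl = c1.getHeaderField("Location");
            System.out.println("redirect " + rc1 + " -> " + dataNodeUrl);

            // Step 2: PUT the bytes to the DataNode; 201 Created is success,
            // 500 is the failure mode reported in the log.
            HttpURLConnection c2 = (HttpURLConnection) new URL(dataNodeUrl).openConnection();
            c2.setRequestMethod("PUT");
            c2.setDoOutput(true);
            try (OutputStream out = c2.getOutputStream()) {
                out.write("hello".getBytes("UTF-8"));
            }
            System.out.println("create status: " + c2.getResponseCode());
        }
    }

Here the second PUT came back as a 500 whose message is, once again, "unable to create new native thread" -- the same slave-level thread exhaustion wearing an HTTP status code.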

Hadoop-Hdfs-trunk - Build # 1279 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1279/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10491 lines...]
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:52996 are bad. Aborting...
  testWriteReadAndDeleteEmptyFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 

Tests run: 1022, Failures: 0, Errors: 18, Skipped: 5

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:38:49.202s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:38:49.982s
[INFO] Finished at: Tue Jan 08 13:12:30 UTC 2013
[INFO] Final Memory: 41M/424M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-3970
Updating HADOOP-9181
Updating YARN-170
Updating HDFS-4362
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
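
The dominant signature in this run, "Couldn't set up IO streams", is the same exhaustion surfacing one layer lower: Hadoop's IPC client starts a connection thread while setting up its streams, and when that thread cannot be created the resulting OutOfMemoryError is wrapped in an IOException carrying exactly this message. A condensed illustration of that wrapping (the names are illustrative; this is not Hadoop's actual code):

    // IoStreamSetupSketch.java -- illustrative only. Shows how failing to
    // start one more native thread becomes the "Couldn't set up IO streams"
    // IOException reported for every TestWebHdfsFileSystemContract case above.
    import java.io.IOException;

    public class IoStreamSetupSketch {
        static void setupIOstreams(Thread connectionThread) throws IOException {
            try {
                // Under a per-user thread cap this throws
                // java.lang.OutOfMemoryError: unable to create new native thread.
                connectionThread.start();
            } catch (Throwable t) {
                throw new IOException("Couldn't set up IO streams", t);
            }
        }

        public static void main(String[] args) throws IOException {
            setupIOstreams(new Thread(() -> { }));
        }
    }

So the three recurring messages in builds #1276 onward ("unable to create new native thread", HTTP 500 on CREATE, and "Couldn't set up IO streams") are plausibly one host-level problem reported through three different layers.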

Hadoop-Hdfs-trunk - Build # 1281 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1281/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10674 lines...]
Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:52934 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 

Tests run: 1659, Failures: 0, Errors: 18, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:51:51.964s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:51:52.746s
[INFO] Finished at: Thu Jan 10 13:25:26 UTC 2013
[INFO] Final Memory: 47M/687M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-9183
Updating MAPREDUCE-1700
Updating HDFS-4306
Updating YARN-325
Updating YARN-320
Updating HADOOP-9155
Updating HDFS-4363
Updating HDFS-4032
Updating MAPREDUCE-4848
Updating MAPREDUCE-4907
Updating HDFS-4261
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 1283 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1283/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10631 lines...]
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:45909 are bad. Aborting...
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread

Tests run: 1660, Failures: 0, Errors: 17, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:33:26.279s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:33:27.094s
[INFO] Finished at: Sat Jan 12 13:07:52 UTC 2013
[INFO] Final Memory: 47M/454M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: Failure or timeout -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4921
Updating HADOOP-9192
Updating HADOOP-9139
Updating HDFS-4384
Updating HDFS-4381
Updating HDFS-4274
Updating HDFS-4328
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 1284 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1284/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10668 lines...]
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.524 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:45990 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread

Tests run: 1660, Failures: 0, Errors: 18, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:20:56.107s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:20:56.889s
[INFO] Finished at: Sun Jan 13 12:54:53 UTC 2013
[INFO] Final Memory: 23M/651M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-1245
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
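
A note on the test errors above: "unable to create new native thread" is the
message of the java.lang.OutOfMemoryError the JVM throws when the operating
system refuses to allocate a stack for another thread, and the "Couldn't set
up IO streams" IPC failures are a downstream symptom of the same exhaustion.
In other words, the build slave is hitting its per-process thread/memory
ceiling, not a bug in the individual WebHDFS tests. A minimal, self-contained
probe (not part of the Hadoop test suite, just a sketch) that reproduces the
failure mode and reports how many threads the current environment allows:

    public class ThreadLimitProbe {
        public static void main(String[] args) {
            int count = 0;
            try {
                while (true) {
                    // Park each thread forever so it keeps its stack allocated.
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try { Thread.sleep(Long.MAX_VALUE); }
                            catch (InterruptedException ignored) { }
                        }
                    });
                    t.setDaemon(true); // so the JVM can still exit once main unwinds
                    t.start();
                    count++;
                }
            } catch (OutOfMemoryError e) {
                // Same failure mode as the surefire forks above.
                System.out.println("Thread creation failed after " + count
                        + " threads: " + e);
            }
        }
    }

Run it with a small thread stack (java -Xss256k ThreadLimitProbe) or under a
low "ulimit -u" to hit the limit quickly; on a loaded Jenkins slave the forked
surefire JVMs evidently reach it on their own.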

Hadoop-Hdfs-trunk - Build # 1285 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1285/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10668 lines...]
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.2 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.443 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:49203 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 

Tests run: 1660, Failures: 0, Errors: 18, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:19:48.736s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:19:49.507s
[INFO] Finished at: Mon Jan 14 12:53:44 UTC 2013
[INFO] Final Memory: 16M/728M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
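
On the "Unexpected HTTP response: code=500 != 201, op=CREATE" errors: WebHDFS
file creation is a two-step HTTP handshake. The client PUTs ?op=CREATE to the
namenode, receives a 307 redirect naming a datanode, PUTs the file bytes
there, and expects 201 Created. The 500s in these runs are the datanode side
failing (again with "unable to create new native thread") before step two can
complete. A hedged sketch of the handshake, with the host, port, and path as
assumptions rather than values taken from the test code:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebHdfsCreateSketch {
        public static void main(String[] args) throws Exception {
            // Step 1: ask the namenode where to write. 50070 is the default
            // namenode HTTP port of this era; adjust for your cluster.
            URL nn = new URL("http://localhost:50070/webhdfs/v1/test/hadoop/file"
                    + "?op=CREATE&overwrite=true");
            HttpURLConnection c1 = (HttpURLConnection) nn.openConnection();
            c1.setRequestMethod("PUT");
            c1.setInstanceFollowRedirects(false); // we want the 307, not auto-follow
            String dn = c1.getHeaderField("Location");
            c1.disconnect();
            if (dn == null) {
                throw new IllegalStateException("no redirect from namenode");
            }

            // Step 2: stream the file body to the datanode named in the redirect.
            HttpURLConnection c2 = (HttpURLConnection) new URL(dn).openConnection();
            c2.setRequestMethod("PUT");
            c2.setDoOutput(true);
            try (OutputStream out = c2.getOutputStream()) {
                out.write("hello".getBytes("UTF-8"));
            }
            // The contract tests assert 201 here; these builds saw 500 instead.
            System.out.println("HTTP " + c2.getResponseCode());
        }
    }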

Hadoop-Hdfs-trunk - Build # 1291 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1291/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11672 lines...]
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.119s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.201s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.15s  27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.17s  31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.012s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
     [exec] 
     [exec] Total time: 15 seconds
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:21:11.676s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:12.460s
[INFO] Finished at: Sun Jan 20 12:55:06 UTC 2013
[INFO] Final Memory: 28M/664M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
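
The site build above fails on the same links every time, and the console shows
two distinct problems: the source for hdfs_design.html
(.../xdocs/hdfs_design.xml) is missing outright, while the image links are
being resolved with a "." where the final path separator should be (the page
links images/hdfs-logo.jpg, but the checker looks for
.../xdocs/images.hdfs-logo.jpg). Forrest also writes the full list to
broken-links.xml, but the console lines alone are enough to triage. A small
hypothetical helper (not part of the build) that pulls the broken targets out
of a saved console log on stdin:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class BrokenLinkGrep {
        // Matches console lines of the form:
        //   [exec] X [0]   images/hdfs-logo.jpg  BROKEN: /path/images.hdfs-logo.jpg (...)
        private static final Pattern BROKEN =
                Pattern.compile("X \\[\\d+\\]\\s+(\\S+)\\s+BROKEN:\\s+(\\S+)");

        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            for (String line; (line = in.readLine()) != null; ) {
                Matcher m = BROKEN.matcher(line);
                if (m.find()) {
                    System.out.println(m.group(1) + "  ->  missing " + m.group(2));
                }
            }
        }
    }

Usage: java BrokenLinkGrep < console.log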

Hadoop-Hdfs-trunk - Build # 1295 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1295/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11678 lines...]
     [exec] * [47/6]    [1/29]    0.272s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.052s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.148s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.188s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.012s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
     [exec] 
     [exec] Total time: 14 seconds
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:20:59.618s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:00.392s
[INFO] Finished at: Thu Jan 24 12:53:56 UTC 2013
[INFO] Final Memory: 26M/374M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-354
Updating HDFS-4426
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 1296 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1296/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11668 lines...]
     [exec] * [48/5]    [0/0]     0.062s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0060s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.331s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.196s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.014s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
     [exec] 
     [exec] Total time: 15 seconds
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:12.195s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:12.970s
[INFO] Finished at: Fri Jan 25 12:55:09 UTC 2013
[INFO] Final Memory: 27M/759M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-2264
Updating HADOOP-9242
Updating HADOOP-9245
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 1297 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1297/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11670 lines...]
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.182s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.179s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.029s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
     [exec] 
     [exec] Total time: 14 seconds
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:23:20.043s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:23:21.787s
[INFO] Finished at: Sat Jan 26 12:56:17 UTC 2013
[INFO] Final Memory: 28M/426M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-8857
Updating MAPREDUCE-4049
Updating HADOOP-9247
Updating HDFS-4443
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-trunk - Build # 1299 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1299/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11664 lines...]
     [exec] * [47/6]    [1/29]    0.274s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.053s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0040s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.153s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.167s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.013s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
     [exec] 
     [exec] Total time: 14 seconds
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:10.993s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:11.778s
[INFO] Finished at: Mon Jan 28 12:55:10 UTC 2013
[INFO] Final Memory: 27M/361M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4259
Updating HADOOP-9241
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Jenkins build is back to normal : Hadoop-Hdfs-trunk #1300

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1300/changes>


Build failed in Jenkins: Hadoop-Hdfs-trunk #1299

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1299/changes>

Changes:

[harsh] HADOOP-9241. DU refresh interval is not configurable. Contributed by Harsh J. (harsh)

[harsh] HDFS-4259. Improve pipeline DN replacement failure message. Contributed by Harsh J. (harsh)
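
The first change above, HADOOP-9241, makes the refresh interval of the DU
disk-usage poller configurable. If memory serves, the knob it adds is
fs.du.interval, in milliseconds with a ten-minute default; treat the key name
and default here as recollection rather than a quotation from the patch. A
sketch of reading it through the stock Configuration API:

    import org.apache.hadoop.conf.Configuration;

    public class DuIntervalCheck {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // "fs.du.interval" is my recollection of the key HADOOP-9241 adds;
            // 600000 ms (10 min) is the assumed default.
            long interval = conf.getLong("fs.du.interval", 600000L);
            System.out.println("DU refresh interval (ms): " + interval);
        }
    }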

------------------------------------------
[...truncated 11471 lines...]
     [exec] 
     [exec] -prepare-classpath:
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.353s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.552s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.386s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.146s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.242s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.179s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.402s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.315s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.158s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.082s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.012s 209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.011s 214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.013s 215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.011s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.019s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.19s  10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.012s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.029s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.709s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.41s  127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.197s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.219s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.15s  9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.061s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0060s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.334s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.26s  49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.011s 390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.067s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.074s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.167s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.236s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.014s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.274s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.053s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0040s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.153s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.167s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.013s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 14 seconds
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:10.993s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:11.778s
[INFO] Finished at: Mon Jan 28 12:55:10 UTC 2013
[INFO] Final Memory: 27M/361M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4259
Updating HADOOP-9241

Build failed in Jenkins: Hadoop-Hdfs-trunk #1298

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1298/>

------------------------------------------
[...truncated 11476 lines...]
     [exec] 
     [exec] init:
     [exec] 
     [exec] -prepare-classpath:
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.381s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.608s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.375s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.126s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.221s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.161s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.369s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.282s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.137s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.076s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.011s 209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.012s 215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.0090s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.018s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.17s  10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.01s  319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.027s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.757s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.383s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.181s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.185s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.277s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.0090s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.058s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0060s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.188s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.244s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.022s 390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.063s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.079s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.161s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.395s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.014s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.128s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.054s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.157s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.169s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.012s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 15 seconds
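
The summary above points at broken-links.xml for the full list; note also that both BROKEN entries in this run resolve their images as "images.hdfsarchitecture.gif" and "images.hdfs-logo.jpg", with a dot where a path separator would normally appear. A hedged sketch for inspecting this from the slave's workspace (the paths are taken from the log; the commands themselves are only illustrative):

    # Show every broken link Forrest recorded for this run
    cat trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
    # Check whether the images exist under the expected images/ directory
    ls trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images/
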
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:28.413s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:29.243s
[INFO] Finished at: Sun Jan 27 12:55:26 UTC 2013
[INFO] Final Memory: 38M/765M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
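
The [ERROR] block above is Maven's standard failure help: -e prints the full stack trace of the failing mojo, and -X enables debug logging. A hedged sketch of the suggested re-runs; the goals shown are an assumption, since this excerpt does not show what Jenkins actually invoked:

    # Re-run with full stack traces
    mvn -e clean install
    # Re-run with full debug output
    mvn -X clean install
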
Build step 'Execute shell' marked build as failure
Archiving artifacts

Hadoop-Hdfs-trunk - Build # 1298 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1298/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11669 lines...]
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.128s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.054s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.157s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.169s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.012s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
     [exec] 
     [exec] Total time: 15 seconds
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:28.413s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:29.243s
[INFO] Finished at: Sun Jan 27 12:55:26 UTC 2013
[INFO] Final Memory: 38M/765M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
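
"No tests ran." confirms that these builds die in the Forrest documentation step before the test suite starts. A hedged sketch for reproducing the docs build outside Jenkins; FORREST_HOME matches the tool path shown in the log, while the "docs" profile is an assumption about how the antrun "site" execution is activated:

    # Use the same Forrest install as the build slave
    export FORREST_HOME=/home/jenkins/tools/forrest/latest
    cd trunk/hadoop-hdfs-project/hadoop-hdfs
    # Assumed invocation for the Forrest-based docs
    mvn site -Pdocs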

Build failed in Jenkins: Hadoop-Hdfs-trunk #1297

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1297/changes>

Changes:

[szetszwo] HDFS-4443. Remove a trailing '`' character from the HTML code generated by NamenodeJspHelper.generateNodeData(..).  Contributed by Christian Rohling

[tucu] Amending MR CHANGES.txt to reflect that MAPREDUCE-4049/4809/4807/4808 are in branch-2

[suresh] HADOOP-9247. Parametrize Clover generateXxx properties to make them re-definable via -D in mvn calls. Contributed by Ivan A. Veselovsky.

[tucu] HADOOP-8857. hadoop.http.authentication.signature.secret.file docs should not state that secret is randomly generated. (tucu)

------------------------------------------
[...truncated 11477 lines...]
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
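Everything up to the validate: target passes; only the Cocoon generation that follows fails. A hedged sketch for running the same validation locally (the forrest launcher path is taken from the log; treating "validate" as a standalone target is an assumption):

    cd trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src
    # Run only Forrest's validation targets against the docs source
    /home/jenkins/tools/forrest/latest/bin/forrest validate
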
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.368s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.531s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.349s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.066s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.27s  11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.174s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.399s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.287s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.128s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.077s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.011s 209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.01s  215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.014s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.012s 214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.018s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.172s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.011s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.03s  4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.711s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.414s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.181s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.199s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.306s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.011s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.059s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0070s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.229s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.291s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.01s  390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.074s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.074s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.167s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.414s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.016s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.146s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.057s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.182s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.179s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.029s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 14 seconds
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:23:20.043s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:23:21.787s
[INFO] Finished at: Sat Jan 26 12:56:17 UTC 2013
[INFO] Final Memory: 28M/426M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-8857
Updating MAPREDUCE-4049
Updating HADOOP-9247
Updating HDFS-4443

Build failed in Jenkins: Hadoop-Hdfs-trunk #1296

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1296/changes>

Changes:

[tucu] MAPREDUCE-2264. Job status exceeds 100% in some cases. (devaraj.k and sandyr via tucu)

[suresh] HADOOP-9245. mvn clean without running mvn install before fails. Contributed by Karthik Kambatla.

[suresh] HADOOP-9242. Duplicate surefire plugin config in hadoop-common. Contributed by Andrey Klochkov.

------------------------------------------
[...truncated 11475 lines...]
     [exec] -prepare-classpath:
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.532s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.656s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.36s  14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.091s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.228s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.172s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.408s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.337s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.144s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.075s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.024s 209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.012s 214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.01s  215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.01s  200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.011s 214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.011s 199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.019s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.184s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.011s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.025s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.723s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.508s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.177s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.213s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.154s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.011s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.059s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0070s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.344s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.286s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.01s  390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.07s  14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.08s  12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.17s  20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.251s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.014s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.141s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.062s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0060s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.331s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.196s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.014s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 15 seconds
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:12.195s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:12.970s
[INFO] Finished at: Fri Jan 25 12:55:09 UTC 2013
[INFO] Final Memory: 27M/759M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-2264
Updating HADOOP-9242
Updating HADOOP-9245

Build failed in Jenkins: Hadoop-Hdfs-trunk #1295

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1295/changes>

Changes:

[szetszwo] Add .classpath, .project, .settings and target to svn:ignore.

[jlowe] YARN-354. WebAppProxyServer exits immediately after startup. Contributed by Liang Xie

[suresh] HDFS-4426. Secondary namenode shuts down immediately after startup. Contributed by Arpit Agarwal.

------------------------------------------
[...truncated 11485 lines...]
     [exec] 
     [exec] -prepare-classpath:
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.369s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.71s  2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.435s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.211s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.241s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.208s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.379s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.286s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.142s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.08s  12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.01s  209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.016s 215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.01s  200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.0090s 199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.017s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.169s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.01s  319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.033s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.678s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.427s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.177s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.216s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.153s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.0090s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.189s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0070s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.274s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.246s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.02s  390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.063s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.072s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.158s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.267s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.014s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.272s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.052s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.148s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.188s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.012s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 14 seconds
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:20:59.618s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:00.392s
[INFO] Finished at: Thu Jan 24 12:53:56 UTC 2013
[INFO] Final Memory: 26M/374M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-354
Updating HDFS-4426
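
The console excerpt above ends with Forrest pointing at broken-links.xml for the
full list of broken links. A minimal triage sketch in Python (the file's schema
is not shown in the log, so this walks the tree generically instead of assuming
particular element names):

    import xml.etree.ElementTree as ET

    # Hypothetical helper: print whatever Forrest recorded in broken-links.xml.
    # The schema is not shown in the build log, so walk the tree generically
    # rather than assuming particular element names.
    def dump_broken_links(path):
        root = ET.parse(path).getroot()
        for elem in root.iter():
            if elem.attrib:
                attrs = " ".join('%s="%s"' % (k, v) for k, v in sorted(elem.attrib.items()))
                print("<%s %s>" % (elem.tag, attrs))

    if __name__ == "__main__":
        dump_broken_links("broken-links.xml")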

Build failed in Jenkins: Hadoop-Hdfs-trunk #1294

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1294/changes>

Changes:

[tomwhite] YARN-319. Submitting a job to a fair scheduler queue for which the user does not have permission causes the client to wait forever. Contributed by shenhong.

[hitesh] YARN-231. RM Restart - Add FS-based persistent store implementation for RMStateStore. Contributed by Bikas Saha

[hitesh] YARN-277. Use AMRMClient in DistributedShell to exemplify the approach. Contributed by Bikas Saha

[suresh] HADOOP-9231. Add missing CHANGES.txt

[suresh] HADOOP-9231. Parametrize staging URL for the uniformity of distributionManagement. Contributed by Konstantin Boudnik.

[sseth] MAPREDUCE-4946. Fix a performance problem for large jobs by reducing the number of map completion event type conversions. Contributed by Jason Lowe.

[tucu] MAPREDUCE-4949. Enable multiple pi jobs to run in parallel. (sandyr via tucu)

[tucu] MAPREDUCE-4808. Refactor MapOutput and MergeManager to facilitate reuse by Shuffle implementations. (masokan via tucu)
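
Of the commits above, YARN-231 is the most architectural: it persists
ResourceManager state to a filesystem so the RM can recover applications after
a restart. A rough sketch of that pattern in Python, using hypothetical names
rather than YARN's actual RMStateStore API:

    import json
    import os
    import tempfile

    # Hypothetical sketch of a filesystem-backed state store in the spirit of
    # YARN-231: one file per application, written to a temp file and renamed
    # so a restarted daemon never reads a half-written record.
    class FileSystemStateStore:
        def __init__(self, root):
            self.root = root
            os.makedirs(root, exist_ok=True)

        def store(self, app_id, state):
            fd, tmp = tempfile.mkstemp(dir=self.root)
            with os.fdopen(fd, "w") as f:
                json.dump(state, f)
            os.rename(tmp, os.path.join(self.root, app_id))  # atomic on POSIX

        def load_all(self):
            states = {}
            for name in os.listdir(self.root):
                with open(os.path.join(self.root, name)) as f:
                    states[name] = json.load(f)
            return states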

------------------------------------------
[...truncated 11489 lines...]
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.357s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.685s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.441s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.197s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.265s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.243s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.385s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.289s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.145s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.078s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.01s  209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.01s  215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.01s  200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.014s 214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.019s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.17s  10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.011s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.025s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.824s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.452s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.181s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.208s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.141s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.0090s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.063s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0060s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.385s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.233s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.01s  390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.072s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.07s  12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.159s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.28s  55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.014s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.13s  8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.048s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.293s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.158s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.023s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 15 seconds
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:56.567s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:57.332s
[INFO] Finished at: Wed Jan 23 12:56:34 UTC 2013
[INFO] Final Memory: 37M/407M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4946
Updating MAPREDUCE-4808
Updating YARN-277
Updating HADOOP-9231
Updating YARN-231
Updating MAPREDUCE-4949
Updating YARN-319
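
The same three Cocoon status codes recur in all of these logs: * for okay, X
for a broken link, ^ for a skipped page. A small helper (hypothetical, for a
console log saved to a local file) that tallies the codes and echoes the broken
entries so they stand out in an 11,000-plus-line log:

    from collections import Counter

    # Hypothetical triage helper: count Cocoon's column-1 status codes
    # (* = okay, X = broken link, ^ = page skipped) in a saved console log
    # and print the broken entries.
    def summarize(console_path):
        counts = Counter()
        with open(console_path) as log:
            for line in log:
                if "[exec]" not in line:
                    continue
                text = line.split("[exec]", 1)[1].strip()
                if text[:1] in ("*", "X", "^"):
                    counts[text[0]] += 1
                    if text[0] == "X":
                        print(text)
        print(dict(counts))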

Hadoop-Hdfs-trunk - Build # 1294 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1294/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1293

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1293/changes>

Changes:

[todd] HDFS-4403. DFSClient can infer checksum type when not provided by reading first byte. Contributed by Todd Lipcon.
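
The HDFS-4403 commit message above compresses the technique into one clause:
when the client is not told a file's checksum type, it reads the first byte of
the file so that the response reveals which checksum algorithm is in use. A toy
illustration of that idea in Python, with a hypothetical header layout rather
than HDFS's real metadata format or wire protocol:

    import struct

    # Toy illustration of checksum-type inference: assume checksum metadata
    # begins with a one-byte algorithm tag and a four-byte bytes-per-checksum
    # field, so reading the header alone identifies the checksum type.
    CHECKSUM_TYPES = {0: "NULL", 1: "CRC32", 2: "CRC32C"}  # hypothetical tags

    def infer_checksum_type(meta_path):
        with open(meta_path, "rb") as f:
            header = f.read(5)
        type_tag = header[0]
        bytes_per_checksum = struct.unpack(">I", header[1:5])[0]
        return CHECKSUM_TYPES.get(type_tag, "UNKNOWN"), bytes_per_checksum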

------------------------------------------
[...truncated 11481 lines...]
     [exec] init:
     [exec] 
     [exec] -prepare-classpath:
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.442s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.534s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.384s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.168s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.248s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.198s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.384s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.169s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.247s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.084s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.01s  209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.0090s 215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.013s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.018s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.172s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.011s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.024s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.774s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.395s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.177s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.2s   23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.136s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.015s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.052s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0060s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.323s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.248s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.025s 390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.073s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.07s  12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.164s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.243s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0010s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.014s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.275s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.062s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.152s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.167s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.013s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 14 seconds
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:21:22.000s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:22.766s
[INFO] Finished at: Tue Jan 22 12:55:15 UTC 2013
[INFO] Final Memory: 37M/774M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4403

Hadoop-Hdfs-trunk - Build # 1293 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1293/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1292

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1292/changes>

Changes:

[suresh] HADOOP-8924. Add CHANGES.txt description missed in commit r1435380.

------------------------------------------
[...truncated 11478 lines...]
     [exec] init:
     [exec] 
     [exec] -prepare-classpath:
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.437s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.558s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.388s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.116s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.227s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.159s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.373s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.314s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.126s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.085s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.01s  209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.011s 215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.019s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.018s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.179s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.011s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.03s  4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.78s  67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.436s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.195s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.214s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.139s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.051s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.13s  1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.185s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.244s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.01s  390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.064s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.072s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.156s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.233s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.015s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.12s  8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.053s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.293s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.182s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.013s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 14 seconds
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:21:01.653s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:02.424s
[INFO] Finished at: Mon Jan 21 12:54:57 UTC 2013
[INFO] Final Memory: 31M/219M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-8924

Hadoop-Hdfs-trunk - Build # 1292 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1292/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11671 lines...]
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.12s  8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.053s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.293s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.182s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.013s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 11 seconds,  Site size: 696,806 Site pages: 43
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
     [exec] 
     [exec] Total time: 14 seconds
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:21:01.653s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:02.424s
[INFO] Finished at: Mon Jan 21 12:54:57 UTC 2013
[INFO] Final Memory: 31M/219M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-8924
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1291

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1291/>

------------------------------------------
[...truncated 11479 lines...]
     [exec] 
     [exec] init:
     [exec] 
     [exec] -prepare-classpath:
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.335s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.472s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.432s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.222s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.336s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.361s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.398s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.369s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.155s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.085s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.011s 209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.011s 214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.018s 215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.011s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.011s 214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.019s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.204s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.012s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.027s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.883s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.554s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.183s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.203s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.159s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.011s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.072s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0070s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.429s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.254s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.01s  390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.068s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.071s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.15s  20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.29s  55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.017s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.119s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.201s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.15s  27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.17s  31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.012s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 15 seconds
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:21:11.676s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:12.460s
[INFO] Finished at: Sun Jan 20 12:55:06 UTC 2013
[INFO] Final Memory: 28M/664M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts

Build failed in Jenkins: Hadoop-Hdfs-trunk #1290

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1290/changes>

Changes:

[jeagles] MAPREDUCE-4458. Warn if java.library.path is used for AM or Task (Robert Parker via jeagles)

[suresh] HADOOP-8924. Add maven plugin alternative to shell script to save package-info.java. Contributed by Alejandro Abdelnur and Chris Nauroth.

[suresh] HADOOP-8924. Revert r1435372 that missed some files

[suresh] HADOOP-8924. Add maven plugin alternative to shell script to save package-info.java. Contributed by Alejandro Abdelnur and Chris Nauroth.

[sseth] MAPREDUCE-4948. Fix a failing unit test - TestYARNRunner.testHistoryServerToken. Contributed by Junping Du

[atm] HDFS-4359. Slow RPC responses from NN can prevent metrics collection on DNs. Contributed by liang xie.

------------------------------------------
[...truncated 10724 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.989 sec
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.381 sec
Running org.apache.hadoop.hdfs.TestHftpDelegationToken
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.722 sec
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.559 sec
Running org.apache.hadoop.hdfs.TestByteRangeInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.307 sec
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.16 sec
Running org.apache.hadoop.hdfs.TestRenameWhileOpen
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.898 sec
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.479 sec
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.914 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.578 sec
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.187 sec
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.352 sec
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.956 sec
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.378 sec
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.434 sec
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.492 sec
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.518 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.306 sec
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.09 sec
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.879 sec
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.185 sec
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.017 sec
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.646 sec
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.317 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.289 sec
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.042 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.506 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.695 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.247 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.055 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.388 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.651 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 130.482 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.104 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.4 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.032 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.53 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.879 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.87 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.394 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.574 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.717 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.231 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.677 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.707 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.069 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.791 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.703 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.177 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.539 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.995 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.374 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.895 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.174 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.969 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.344 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.162 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.766 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.055 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.617 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.603 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.087 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.269 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.167 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.128 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.49 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.479 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.34 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.443 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.285 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.561 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.238 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.594 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.16 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.095 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.867 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.921 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.883 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.892 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.155 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.194 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.343 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.705 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.624 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.076 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.217 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.969 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.098 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.049 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.202 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.408 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.138 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.328 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Tests in error: 
  testBalancerWithNodeGroup(org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup): test timed out after 60000 milliseconds

Tests run: 1662, Failures: 0, Errors: 1, Skipped: 6
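
Unlike the earlier site failures, this run failed on a single test, which hit a 60-second timeout. A sketch of re-running just that test in isolation (the -Dtest filter is a standard Surefire option; the class name is taken from the error line above):

    cd trunk/hadoop-hdfs-project/hadoop-hdfs

    # Run only the timed-out balancer test, skipping the rest of the suite:
    mvn test -Dtest=TestBalancerWithNodeGroup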

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:19:31.674s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:19:32.448s
[INFO] Finished at: Sat Jan 19 12:53:27 UTC 2013
[INFO] Final Memory: 22M/586M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4948
Updating HDFS-4359
Updating MAPREDUCE-4458
Updating HADOOP-8924
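
The Surefire error above points at target/surefire-reports for per-test details. A sketch of inspecting those reports on the build slave (paths assume the workspace layout from the log; the file names follow Surefire's usual <class>.txt / TEST-<class>.xml convention):

    cd trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports

    # Plain-text summary for the failing test:
    cat org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.txt

    # Full XML report, including the timeout stack trace:
    less TEST-org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.xml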

Hadoop-Hdfs-trunk - Build # 1290 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1290/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1289

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1289/changes>

Changes:

[tucu] YARN-302. Fair scheduler assignmultiple should default to false. (sandyr via tucu)

[tucu] MAPREDUCE-4923. Add toString method to TaggedInputSplit. (sandyr via tucu)

[tucu] YARN-331. Fill in missing fair scheduler documentation. (sandyr via tucu)

[atm] HDFS-4415. HostnameFilter should handle hostname resolution failures and continue processing. Contributed by Robert Kanter.

[bobby] HADOOP-8849. FileUtil#fullyDelete should grant the target directories +rwx permissions (Ivan A. Veselovsky via bobby)

[suresh] HDFS-4393. Make empty requests and responses in protocol translators static final members. Contributed by Brandon Li.

------------------------------------------
[...truncated 11413 lines...]
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.361s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.684s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.427s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.207s 15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.255s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.227s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.385s 348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.295s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.147s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.085s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.012s 209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.01s  215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.012s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.019s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.174s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.011s 319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.028s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.77s  67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.425s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.201s 19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.236s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.151s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.01s  199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.058s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0070s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.391s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.237s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.011s 390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.066s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.082s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.163s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.247s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.014s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.131s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.053s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.302s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.176s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.013s 766b    images/favicon.ico
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 15 seconds
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:21:35.686s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:21:36.459s
[INFO] Finished at: Fri Jan 18 12:55:28 UTC 2013
[INFO] Final Memory: 26M/379M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-331
Updating HDFS-4393
Updating HADOOP-8849
Updating MAPREDUCE-4923
Updating HDFS-4415
Updating YARN-302

Hadoop-Hdfs-trunk - Build # 1289 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1289/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1288

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1288/changes>

Changes:

[todd] HADOOP-9216. CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration. Contributed by Tsuyoshi Ozawa. (A hedged sketch follows this changes list.)

[todd] HADOOP-9215. when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0). Contributed by Colin Patrick McCabe.

[todd] HADOOP-9193. hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script. Contributed by Andy Isaacson.

[daryn] HADOOP-8999. Move to incompatible section of changelog

[tgraves] Preparing for release 0.23.6
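
A hedged sketch of the HADOOP-9216 item above (illustrative of the idea behind the
fix, not the literal patch): Configuration#getTrimmedStrings strips the stray
whitespace around comma-separated codec names that would otherwise break the
Class.forName lookups inside CompressionCodecFactory#getCodecClasses.

    import org.apache.hadoop.conf.Configuration;

    public class TrimmedCodecsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Deliberately sloppy value with spaces around each class name.
        conf.set("io.compression.codecs",
            " org.apache.hadoop.io.compress.GzipCodec , org.apache.hadoop.io.compress.DefaultCodec ");
        for (String codecClass : conf.getTrimmedStrings("io.compression.codecs")) {
          // With a plain get()+split(",") the surrounding spaces survive and
          // Class.forName(" org...GzipCodec") throws ClassNotFoundException.
          System.out.println(Class.forName(codecClass).getName());
        }
      }
    }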

------------------------------------------
[...truncated 11418 lines...]
     [exec] 
     [exec] check-contentdir:
     [exec] 
     [exec] examine-proj:
     [exec] 
     [exec] validation-props:
     [exec] Using these catalog descriptors: /home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/resources/schema/catalog.xcat>
     [exec] 
     [exec] validate-xdocs:
     [exec] 12 file(s) have been successfully validated.
     [exec] ...validated xdocs
     [exec] 
     [exec] validate-skinconf:
     [exec] 1 file(s) have been successfully validated.
     [exec] ...validated skinconf
     [exec] 
     [exec] validate-sitemap:
     [exec] 
     [exec] validate-skins-stylesheets:
     [exec] 
     [exec] validate-skins:
     [exec] 
     [exec] validate-skinchoice:
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/webapp/resources> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images> not found.
     [exec] ...validated existence of skin 'pelt'
     [exec] 
     [exec] validate-stylesheets:
     [exec] 
     [exec] validate:
     [exec] 
     [exec] site:
     [exec] 
     [exec] Copying the various non-generated resources to site.
     [exec] Warnings will be issued if the optional project resources are not found.
     [exec] This is often the case, because they are optional and so may not be available.
     [exec] Copying project resources and images to site ...
     [exec] Copied 1 empty directory to 1 empty directory under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] Copying main skin images to site ...
     [exec] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 20 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying 14 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images>
     [exec] Copying project skin images to site ...
     [exec] Copying main skin css and js files to site ...
     [exec] Copying 11 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copied 4 empty directories to 3 empty directories under <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying 4 files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin>
     [exec] Copying project skin css and js files to site ...
     [exec] 
     [exec] Finished copying the non-generated resources.
     [exec] Now Cocoon will generate the rest.
     [exec]           
     [exec] 
     [exec] Static site will be generated at:
     [exec] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
     [exec] 
     [exec] Cocoon will report the status of each document:
     [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec]   
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common> not found.
     [exec] Warning: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt> not found.
     [exec] ------------------------------------------------------------------------ 
     [exec] cocoon 2.1.12-dev
     [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights reserved.
     [exec] ------------------------------------------------------------------------ 
     [exec] 
     [exec] 
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [1/26]    [26/30]   2.501s 8.6Kb   linkmap.html
     [exec] * [3/24]    [0/0]     0.692s 2.9Kb   skin/basic.css
     [exec] X [0]                                     hdfs_design.html	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/hdfs_design.xml> (No such file or directory)
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [5/23]    [1/29]    0.488s 14.5Kb  SLG_user_guide.html
     [exec] * [6/22]    [0/0]     1.47s  15.7Kb  SLG_user_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [7/22]    [1/29]    0.208s 11.7Kb  hdfs_quota_admin_guide.html
     [exec] * [8/21]    [0/0]     0.175s 13.9Kb  hdfs_quota_admin_guide.pdf
     [exec] Fontconfig error: Cannot load default config file
     [exec] * [9/20]    [0/0]     0.51s  348b    skin/images/rc-b-l-15-1body-2menu-3menu.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [11/20]   [2/31]    0.275s 7.0Kb   index.html
     [exec] * [12/19]   [0/0]     0.128s 10.1Kb  linkmap.pdf
     [exec] * [13/31]   [13/13]   0.076s 12.3Kb  skin/screen.css
     [exec] * [15/29]   [0/0]     0.011s 209b    skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
     [exec] * [16/28]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [17/27]   [0/0]     0.0090s 215b    skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [18/26]   [0/0]     0.011s 200b    skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
     [exec] * [19/25]   [0/0]     0.01s  214b    skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
     [exec] * [20/24]   [0/0]     0.0090s 199b    skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
     [exec] * [22/22]   [0/0]     0.018s 1.2Kb   skin/print.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [24/21]   [1/29]    0.173s 10.8Kb  hdfs_editsviewer.html
     [exec] * [25/20]   [0/0]     0.01s  319b    skin/images/rc-b-r-15-1body-2menu-3menu.png
     [exec] * [27/18]   [0/0]     0.032s 4.4Kb   skin/profile.css
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileStatus.html
     [exec] ^                                    api/org/apache/hadoop/fs/Path.html
     [exec] * [28/18]   [1/63]    0.692s 67.6Kb  webhdfs.html
     [exec] WARN - Line 1 of a paragraph overflows the available area by 30000mpt. (fo:block, "dfs.web.authentication.kerberos.principal")
     [exec] WARN - Line 1 of a paragraph overflows the available area by 12000mpt. (fo:block, "dfs.web.authentication.kerberos.keytab")
     [exec] * [29/17]   [0/0]     1.399s 127.4Kb webhdfs.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [31/16]   [1/29]    0.18s  19.5Kb  hdfs_permissions_guide.html
     [exec] * [32/15]   [0/0]     0.212s 23.3Kb  hdfs_permissions_guide.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] ^                                    api/org/apache/hadoop/fs/FileSystem.html
     [exec] * [33/15]   [1/30]    0.262s 9.5Kb   libhdfs.html
     [exec] * [34/14]   [0/0]     0.012s 199b    skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
     [exec] * [35/13]   [0/0]     0.052s 8.0Kb   index.pdf
     [exec] * [36/12]   [0/0]     0.0060s 1.8Kb   images/built-with-forrest-button.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [37/13]   [2/31]    0.193s 37.1Kb  hdfs_user_guide.html
     [exec] * [38/12]   [0/0]     0.248s 49.6Kb  hdfs_user_guide.pdf
     [exec] X [0]                                     images/hdfsarchitecture.gif	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfsarchitecture.gif> (No such file or directory)
     [exec] * [40/10]   [0/0]     0.01s  390b    skin/images/rc-t-r-15-1body-2menu-3menu.png
     [exec] * [41/9]    [0/0]     0.068s 14.0Kb  libhdfs.pdf
     [exec] * [42/8]    [0/0]     0.073s 12.3Kb  hdfs_editsviewer.pdf
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [43/9]    [2/30]    0.184s 20.0Kb  faultinject_framework.html
     [exec] WARN - Page 5: Unresolved id reference "Putting+it+all+together" found.
     [exec] WARN - Page 6: Unresolved id reference "Putting+it+all+together" found.
     [exec] * [44/8]    [0/0]     0.395s 55.5Kb  faultinject_framework.pdf
     [exec] * [45/7]    [0/0]     0.0020s 30.2Kb  images/FI-framework.gif
     [exec] * [46/6]    [0/0]     0.015s 9.2Kb   images/hadoop-logo.jpg
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [47/6]    [1/29]    0.129s 8.4Kb   hftp.html
     [exec] * [48/5]    [0/0]     0.055s 10.6Kb  hftp.pdf
     [exec] X [0]                                     images/hdfs-logo.jpg	BROKEN: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/content/xdocs/images.hdfs-logo.jpg> (No such file or directory)
     [exec] * [50/3]    [0/0]     0.0050s 285b    images/instruction_arrow.png
     [exec] ^                                    api/index.html
     [exec] ^                                    jdiff/changes.html
     [exec] ^                                    releasenotes.html
     [exec] ^                                    changes.html
     [exec] * [51/3]    [1/29]    0.155s 27.3Kb  hdfs_imageviewer.html
     [exec] * [52/2]    [0/0]     0.177s 31.0Kb  hdfs_imageviewer.pdf
     [exec] * [54/0]    [0/0]     0.012s 766b    images/favicon.ico
     [exec] Total time: 0 minutes 12 seconds,  Site size: 696,806 Site pages: 43
     [exec] Java Result: 1
     [exec] 
     [exec] BUILD FAILED
     [exec] /home/jenkins/tools/forrest/latest/main/targets/site.xml:224: Error building site.
     [exec]         
     [exec] There appears to be a problem with your site build.
     [exec] 
     [exec] Read the output above:
     [exec] * Cocoon will report the status of each document:
     [exec]     - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
     [exec] * Even if only one link is broken, you will still get "failed".
     [exec] * Your site would still be generated, but some pages would be broken.
     [exec]   - See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml>
     [exec] 
     [exec] Total time: 16 seconds
     [exec] 
     [exec]   Copying broken links file to site root.
     [exec]       
     [exec] Copying 1 file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site>
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:22:08.575s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:22:09.350s
[INFO] Finished at: Thu Jan 17 12:56:03 UTC 2013
[INFO] Final Memory: 41M/483M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-hdfs: An Ant BuildException has occurred: exec returned: 1 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-9193
Updating HADOOP-8999
Updating HADOOP-9215
Updating HADOOP-9216

Hadoop-Hdfs-trunk - Build # 1288 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1288/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1287

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1287/changes>

Changes:

[tomwhite] HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI.

[todd] Revert r1433755: HDFS-4288. NN accepts incremental BR as IBR in safemode. Contributed by Daryn Sharp.

This commit caused TestBlockManager to fail.

[todd] HDFS-4288. NN accepts incremental BR as IBR in safemode. Contributed by Daryn Sharp.

[tucu] MAPREDUCE-4924. flakey test: org.apache.hadoop.mapred.TestClusterMRNotification.testMR. (rkanter via tucu)

[suresh] HADOOP-9106. Allow configuration of IPC connect timeout. Contributed by Robert Parker. (A hedged configuration sketch follows this changes list.)

[suresh] HADOOP-9217. Print thread dumps when hadoop-common tests fail. Contributed by Andrey Klochkov.

[todd] HADOOP-8712. Change default hadoop.security.group.mapping to JniBasedUnixGroupsNetgroupMappingWithFallback. Contributed by Robert Parker.

[sseth] YARN-135. Add missing files from last commit.

[suresh] HDFS-4392. Use NetUtils#getFreeSocketPort in MiniDFSCluster. Contributed by Andrew Purtell.

[sseth] YARN-135. Client tokens should be per app-attempt, and should be unregistered on App-finish. Contributed by Vinod Kumar Vavilapalli

[tucu] HADOOP-8816. HTTP Error 413 full HEAD if using kerberos authentication. (moritzmoeller via tucu)

[tomwhite] YARN-336. Fair scheduler FIFO scheduling within a queue only allows 1 app at a time. Contributed by Sandy Ryza.

[tomwhite] YARN-335. Fair scheduler doesn't check whether rack needs containers before assigning to node. Contributed by Sandy Ryza.

[acmurthy] HDFS-4399. Fix RAT warnings by excluding images sub-dir in docs. Contributed by Thomas Graves.

[jlowe] MAPREDUCE-4936. JobImpl uber checks for cpu are wrong. Contributed by Arun C Murthy

[harsh] MAPREDUCE-4925. The pentomino option parser may be buggy. Contributed by Karthik Kambatla. (harsh)

[harsh] Moving MAPREDUCE-4678's changes line to 0.23 section to prepare for backport.
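
On the HADOOP-9106 item above: a minimal configuration sketch, assuming
"ipc.client.connect.timeout" is the key that change introduces (before it, the
20-second IPC connect timeout was effectively hard-coded in the client):

    import org.apache.hadoop.conf.Configuration;

    public class IpcConnectTimeoutExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Key name assumed from HADOOP-9106; value is in milliseconds.
        conf.setInt("ipc.client.connect.timeout", 5000);
        System.out.println(conf.getInt("ipc.client.connect.timeout", 20000));
      }
    }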

------------------------------------------
[...truncated 9892 lines...]
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.018 sec
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.795 sec
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.242 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 36, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 20.895 sec <<< FAILURE!
testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 125 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): DFSOutputStream is closed
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:310)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:111)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:710)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:530)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOverWriteAndRead(FileSystemContractBaseTest.java:493)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
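
Reading the trace: the failure is raised from close(), not from the write itself.
A self-contained sketch of that write-then-close shape (assumed URI and path; this
is not the FileSystemContractBaseTest source):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WebHdfsCloseExample {
      public static void main(String[] args) throws Exception {
        // webhdfs:// endpoint assumed; the test points this at a MiniDFSCluster.
        FileSystem fs = FileSystem.get(
            URI.create("webhdfs://localhost:50070"), new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/overwrite-test"), true);
        out.write(new byte[4096]);
        // WebHdfsFileSystem validates the HTTP response when the stream is
        // closed, so the server-side "DFSOutputStream is closed" RemoteException
        // surfaces here, at close() -- matching the trace above.
        out.close();
      }
    }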

Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.788 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.5 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.677 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.596 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.056 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.757 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 130.364 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.245 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.818 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.079 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.569 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.882 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.382 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.153 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.288 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.726 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.371 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.42 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.659 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.189 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.274 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.684 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.155 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.122 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.376 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.696 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.961 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.665 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.097 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.716 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.342 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.893 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.442 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.013 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.845 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.008 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.534 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.047 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.015 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.942 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.237 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.12 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.444 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.383 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.789 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.238 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.631 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.325 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.233 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.902 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.438 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.017 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.859 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.245 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.267 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.167 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.905 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.578 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.01 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.16 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.845 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.809 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.171 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.778 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.648 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.297 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.292 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec

Results :

Tests in error: 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed

Tests run: 1662, Failures: 0, Errors: 1, Skipped: 6
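
The lone error above, "DFSOutputStream is closed", is the guard an HDFS output stream raises when a write or close arrives after the stream has already been shut down, usually because an earlier operation on it failed. A minimal plain-Java sketch of that guard pattern, as an illustration only and not the HDFS source (the class name and message are hypothetical):

    import java.io.IOException;
    import java.io.OutputStream;

    // Illustration only: a stream that rejects use after close,
    // producing a "stream is closed" style IOException like the one
    // reported for testOverWriteAndRead above.
    class ClosedGuardStream extends OutputStream {
        private boolean closed = false;

        private void checkOpen() throws IOException {
            if (closed) {
                throw new IOException("ClosedGuardStream is closed");
            }
        }

        @Override
        public void write(int b) throws IOException {
            checkOpen(); // a real stream would buffer the byte here
        }

        @Override
        public void close() {
            closed = true; // any later write()/flush() now fails fast
        }
    }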

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:20:58.036s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:20:58.811s
[INFO] Finished at: Wed Jan 16 12:55:04 UTC 2013
[INFO] Final Memory: 30M/392M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4392
Updating YARN-335
Updating YARN-135
Updating YARN-336
Updating HADOOP-8712
Updating HADOOP-9217
Updating MAPREDUCE-4936
Updating MAPREDUCE-4925
Updating HADOOP-9212
Updating MAPREDUCE-4924
Updating HADOOP-8816
Updating MAPREDUCE-4678
Updating HADOOP-9106
Updating HDFS-4288
Updating HDFS-4399

Hadoop-Hdfs-trunk - Build # 1287 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1287/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1286

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/changes>

Changes:

[eli] Add missing file from previous commit.

[eli] HADOOP-9178. src/main/conf is missing hadoop-policy.xml. Contributed by Sandy Ryza

[suresh] HDFS-4375. Use token request messages defined in hadoop common. Contributed by Suresh Srinivas.

[suresh] YARN-328. Use token request messages defined in hadoop common. Contributed by Suresh Srinivas.

[suresh] MAPREDUCE-4938. Use token request messages defined in hadoop common. Contributed by Suresh Srinivas.

[suresh] HADOOP-9203. RPCCallBenchmark should find a random available port. Contributed by Andrew Purtell.

[suresh] HDFS-4369. GetBlockKeysResponseProto does not handle null response. Contributed by Suresh Srinivas.

[suresh] HDFS-4364. GetLinkTargetResponseProto does not handle null path. Contributed by Suresh Srinivas.

[hitesh] YARN-330. Fix flakey test: TestNodeManagerShutdown#testKillContainersOnShutdown. Contributed by Sandy Ryza

[todd] HDFS-3429. DataNode reads checksums even if client does not need them. Contributed by Todd Lipcon.

[bobby] HADOOP-9202. test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build (Chris Nauroth via bobby)

[tgraves] HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves)

[tgraves] HDFS-4385. Maven RAT plugin is not checking all source files (tgraves)

[tgraves] MAPREDUCE-4934. Maven RAT plugin is not checking all source files (tgraves)

[tgraves] YARN-334. Maven RAT plugin is not checking all source files (tgraves)

------------------------------------------
[...truncated 10661 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.467 sec
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.497 sec
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 138.584 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.485 sec
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.959 sec
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.655 sec
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.508 sec
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.763 sec
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.798 sec
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.889 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.163 sec
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.526 sec
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.177 sec
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.698 sec
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.022 sec
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.782 sec
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.224 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.663 sec
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.425 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.504 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.348 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.893 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.046 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.493 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 133.715 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.09 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.9 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.122 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.735 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.956 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.935 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.439 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.009 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.79 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.458 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.395 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.339 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.311 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.86 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.702 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.598 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.258 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.48 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.58 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.945 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.775 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.021 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.818 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.387 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.606 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.476 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.669 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.774 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.805 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.179 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.046 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.233 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.679 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.274 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.726 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.455 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.523 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.735 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.261 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.585 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.449 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.034 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.861 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.059 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.131 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.908 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.003 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.194 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.147 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.151 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.862 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.724 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.816 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.198 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.87 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.952 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.166 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.601 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.565 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.264 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.408 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Failed tests:   testBalancerEndInNoMoveProgress(org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup)

Tests in error: 
  testBalancerWithNodeGroup(org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup): test timed out after 60000 milliseconds

Tests run: 1662, Failures: 1, Errors: 1, Skipped: 6
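
The 60000-millisecond limit reported for testBalancerWithNodeGroup is JUnit 4's per-test timeout: when the wall clock expires, the runner aborts the test and records exactly this "test timed out after 60000 milliseconds" error. A self-contained illustration of the mechanism (a hypothetical test, not the balancer code):

    import org.junit.Test;

    public class TimeoutIllustration {
        // JUnit 4 runs the body in a watched thread and fails the test
        // with "test timed out after 60000 milliseconds" if it overruns.
        @Test(timeout = 60000)
        public void slowOperation() throws InterruptedException {
            Thread.sleep(61000); // deliberately exceeds the limit
        }
    }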

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:20:25.131s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:20:25.899s
[INFO] Finished at: Tue Jan 15 12:54:21 UTC 2013
[INFO] Final Memory: 16M/671M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-334
Updating HADOOP-9097
Updating HDFS-4364
Updating HADOOP-9203
Updating MAPREDUCE-4938
Updating HDFS-4375
Updating HDFS-3429
Updating HADOOP-9178
Updating HDFS-4385
Updating HADOOP-9202
Updating MAPREDUCE-4934
Updating YARN-330
Updating HDFS-4369
Updating YARN-328

Hadoop-Hdfs-trunk - Build # 1286 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1285

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1285/>

------------------------------------------
[...truncated 10475 lines...]
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 11 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:310)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:111)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:710)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:530)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOverWriteAndRead(FileSystemContractBaseTest.java:489)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
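
The frames above show why this error only surfaces at close(): WebHdfsFileSystem validates the HTTP response from WebHdfsFileSystem$1.close(), so a server-side failure during FileSystemContractBaseTest.writeAndRead is reported when the test closes its stream. A rough sketch of the write-then-read pattern writeAndRead exercises, using the public FileSystem API (class name, path, and payload here are illustrative, not the test's actual values):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteAndReadSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // The contract test targets a webhdfs:// URI backed by a
            // MiniDFSCluster; fs.defaultFS decides the scheme here.
            FileSystem fs = FileSystem.get(conf);
            Path p = new Path("/test/write-and-read-sketch");
            byte[] data = "payload".getBytes("UTF-8");

            FSDataOutputStream out = fs.create(p, true); // overwrite = true
            out.write(data);
            out.close(); // for WebHDFS, response validation happens here

            byte[] back = new byte[data.length];
            FSDataInputStream in = fs.open(p);
            in.readFully(back); // the test then compares back with data
            in.close();
        }
    }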

Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.034 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.402 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.618 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.068 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.546 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.72 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.295 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 129.587 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.047 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.533 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.015 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.339 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.029 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.267 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.942 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.425 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.202 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.163 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.884 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.873 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.854 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.811 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.608 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.579 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.957 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.571 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.208 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.102 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.509 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.098 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.861 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.979 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.35 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.768 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.259 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.214 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.988 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.155 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.412 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.884 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.455 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.435 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.851 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.902 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.165 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.216 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.096 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.584 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.379 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.641 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.869 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.662 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.247 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.903 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.38 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.239 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.2 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.871 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.5 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.773 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.381 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.759 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.022 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.895 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.195 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.664 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.2 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.443 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:49203 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":37316; 

Tests run: 1660, Failures: 0, Errors: 18, Skipped: 6
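
Most of these 18 errors share one root cause visible in their messages: the JVM on the build slave could not create more native threads, so datanode and client threads failed to start and every code path degrades from there (HTTP 500 on CREATE, "all datanodes are bad", failed IO stream setup). When the OS refuses a thread, Thread.start() throws OutOfMemoryError("unable to create new native thread"), which the server side then surfaces as the code=500 responses seen above. A deliberately resource-hungry illustration of the JVM behavior (hypothetical class name; do not run on a shared machine):

    public class ThreadExhaustionSketch {
        public static void main(String[] args) {
            long started = 0;
            try {
                while (true) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Long.MAX_VALUE); // park forever
                            } catch (InterruptedException ignored) {
                            }
                        }
                    });
                    t.start(); // eventually throws OutOfMemoryError
                    started++;
                }
            } catch (OutOfMemoryError e) {
                // Typically "unable to create new native thread"
                System.out.println("Threads started before failure: " + started);
            }
        }
    }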

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:19:48.736s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:19:49.507s
[INFO] Finished at: Mon Jan 14 12:53:44 UTC 2013
[INFO] Final Memory: 16M/728M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts

Build failed in Jenkins: Hadoop-Hdfs-trunk #1284

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1284/changes>

Changes:

[shv] HDFS-1245. Change typo in Pluggable.

[shv] HDFS-1245. Plugable block id generation. Contributed by Konstantin Shvachko.

------------------------------------------
[...truncated 10475 lines...]
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 55 sec  <<< ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:303)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:111)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:710)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:530)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOverWriteAndRead(FileSystemContractBaseTest.java:489)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.235 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.506 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.506 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.302 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.052 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.38 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.598 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 134.038 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.14 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.599 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.005 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.446 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.946 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.176 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.084 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.661 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.439 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.476 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.299 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.993 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.237 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.29 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.708 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.754 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.305 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.928 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.848 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.74 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.232 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.076 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.01 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.084 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.784 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.757 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.604 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.926 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.254 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.568 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.151 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.137 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.741 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.822 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.941 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.414 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.16 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.671 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.094 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.611 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.668 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.792 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.866 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.185 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.131 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.014 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.893 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.115 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.161 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.176 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.75 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.645 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.063 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.25 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.789 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.824 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.589 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.414 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.729 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.214 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.524 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:45990 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43095; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread

Tests run: 1660, Failures: 0, Errors: 18, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:20:56.107s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:20:56.889s
[INFO] Finished at: Sun Jan 13 12:54:53 UTC 2013
[INFO] Final Memory: 23M/651M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-1245
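
A note on the errors above: the WebHDFS CREATE calls fail with HTTP 500 and the message "unable to create new native thread", and the IPC client reports "Couldn't set up IO streams" (its connection setup also spawns a receiver thread, so the same thread exhaustion surfaces there). Both point at the build slave hitting its OS native-thread ceiling, not at a heap shortage. A minimal, self-contained Java sketch of how the JVM surfaces that condition (illustrative only, not part of the build; do not run it on a shared machine, since it deliberately exhausts the thread limit):

    // ThreadExhaustion.java -- spawns idle daemon threads until the OS refuses
    // another native thread; the JVM then throws java.lang.OutOfMemoryError
    // with the same "unable to create new native thread" message seen above.
    public class ThreadExhaustion {
        public static void main(String[] args) {
            long count = 0;
            try {
                while (true) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Long.MAX_VALUE);  // park forever
                            } catch (InterruptedException ignored) {
                            }
                        }
                    });
                    t.setDaemon(true);  // let the JVM exit despite live threads
                    t.start();
                    count++;
                }
            } catch (OutOfMemoryError e) {
                // Thrown when the per-user process limit or kernel thread limit
                // (or address space for thread stacks) is exhausted, regardless
                // of free heap.
                System.err.println("created " + count + " threads before: "
                        + e.getMessage());
            }
        }
    }

On Linux the ceiling is usually the per-user process limit (ulimit -u) or kernel.threads-max, which is why the same 1660-test run can pass on one slave and fail on another.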

Build failed in Jenkins: Hadoop-Hdfs-trunk #1283

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1283/changes>

Changes:

[eli] Update CHANGES.txt to reflect HDFS-4274 merge.

[bobby] MAPREDUCE-4921. JobClient should acquire HS token with RM principal (daryn via bobby)

[eli] Update CHANGES.txt to move HDFS-4328.

[eli] HDFS-4384. test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized. Contributed by Colin Patrick McCabe

[suresh] HADOOP-9192. Move token related request/response messages to common. Contributed by Suresh Srinivas.

[bobby] HADOOP-9139 improve killKdc.sh (Ivan A. Veselovsky via bobby)

[suresh] HDFS-4381. Document fsimage format details in FSImageFormat class javadoc. Contributed by Jing Zhao.

------------------------------------------
[...truncated 10438 lines...]
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 13 sec  <<< ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:303)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:111)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:710)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:530)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOverWriteAndRead(FileSystemContractBaseTest.java:489)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
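
The trace above ends in WebHdfsFileSystem.validateResponse, which compares the HTTP status against the code expected for the operation and raises the server-side message as an IOException. A simplified sketch of that pattern (names follow the stack trace; this is an approximation, and the real method first decodes the JSON error body via JsonUtil.toRemoteException):

    // Approximate shape of the check at WebHdfsFileSystem.validateResponse:
    // CREATE expects 201, so the 500 produced by the datanode's thread
    // exhaustion surfaces as the "code=500 != 201" IOException seen above.
    static void validateResponse(String op, int expected,
            java.net.HttpURLConnection conn) throws java.io.IOException {
        final int code = conn.getResponseCode();
        if (code != expected) {
            throw new java.io.IOException("Unexpected HTTP response: code="
                    + code + " != " + expected + ", op=" + op
                    + ", message=" + conn.getResponseMessage());
        }
    }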

Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.028 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.501 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.628 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.74 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.054 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.38 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.512 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 139.455 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.133 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.76 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.005 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.609 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.857 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.371 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.268 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.902 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.077 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.29 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.522 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.24 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.82 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.864 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.915 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.64 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.135 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.062 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.153 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.411 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.095 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.077 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.224 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.293 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.742 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.33 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.964 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.732 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.5 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.611 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.058 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.968 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.647 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.437 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.856 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.446 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.022 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.618 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.235 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.094 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.663 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.388 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.775 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.865 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.857 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.034 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.869 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.216 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.132 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.399 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.663 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.646 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.928 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.281 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.831 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.813 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.212 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.116 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.755 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.126 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.548 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:45909 are bad. Aborting...
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":48678; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread

Tests run: 1660, Failures: 0, Errors: 17, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:33:26.279s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:33:27.094s
[INFO] Finished at: Sat Jan 12 13:07:52 UTC 2013
[INFO] Final Memory: 47M/454M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: Failure or timeout -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4921
Updating HADOOP-9192
Updating HADOOP-9139
Updating HDFS-4384
Updating HDFS-4381
Updating HDFS-4274
Updating HDFS-4328
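
All 17 errors in this run again come from org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract, so when triaging locally it is cheaper to rerun just that suite than all 1660 tests (with Surefire: mvn test -Dtest=TestWebHdfsFileSystemContract). The same thing through plain JUnit 4, as a sketch that assumes the hadoop-hdfs test classes are on the classpath (the RerunFailingSuite class name is illustrative):

    // RerunFailingSuite.java -- runs only the failing contract suite and
    // prints each failure; equivalent in spirit to
    // "mvn test -Dtest=TestWebHdfsFileSystemContract".
    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    public class RerunFailingSuite {
        public static void main(String[] args) {
            Result result = JUnitCore.runClasses(
                    org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.class);
            for (Failure f : result.getFailures()) {
                System.err.println(f.getTestHeader() + ": " + f.getMessage());
            }
            System.out.println(result.getRunCount() + " run, "
                    + result.getFailureCount() + " failed");
        }
    }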

Build failed in Jenkins: Hadoop-Hdfs-trunk #1282

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1282/changes>

Changes:

[atm] HDFS-4328. TestLargeBlock#testLargeBlockSize is timing out. Contributed by Chris Nauroth.

[eli] HDFS-4377. Some trivial DN comment cleanup. Contributed by Eli Collins

[eyang] HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang)

[eyang] HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang)

[suresh] HDFS-4382. Fix typo MAX_NOT_CHANGED_INTERATIONS. Contributed by Ted Yu.

[tucu] MAPREDUCE-4907. Amendment, forgot to svn add testcase in original commit

[suresh] HDFS-4367. GetDataEncryptionKeyResponseProto does not handle null response. Contributed by Suresh Srinivas.

------------------------------------------
[...truncated 10474 lines...]
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 10 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":50208; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:310)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:111)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:710)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:530)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOverWriteAndRead(FileSystemContractBaseTest.java:489)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.553 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.503 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.5 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.977 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.058 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.638 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 128.019 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.174 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.266 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.02 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.51 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.531 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.031 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.23 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.834 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.806 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.396 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.847 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.068 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.141 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.154 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.663 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.475 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.678 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.894 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.806 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.394 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.846 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.032 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.82 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.117 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.694 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.211 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.827 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.685 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.891 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.323 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.002 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.969 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.979 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.818 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.109 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.444 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.453 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.618 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.576 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.332 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.819 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.874 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.292 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.131 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.068 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.018 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.192 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.173 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.209 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.634 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.542 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.758 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.113 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.187 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.767 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.978 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.07 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.066 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.266 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.18 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:35985 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":50208; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":50208; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":50208; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":50208; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":50208; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":50208; 

Tests run: 1660, Failures: 0, Errors: 18, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:34:12.330s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:34:13.108s
[INFO] Finished at: Fri Jan 11 13:07:50 UTC 2013
[INFO] Final Memory: 48M/388M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: Failure or timeout -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-8419
Updating HDFS-4382
Updating HDFS-4377
Updating HDFS-4367
Updating MAPREDUCE-4907
Updating HDFS-4328
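
The replication error above ("could only be replicated to 0 nodes instead of minReplication (=1) ... 2 datanode(s) running and 2 node(s) are excluded") means the contract test's two-datanode MiniDFSCluster was up, but the client had already excluded both datanodes after failed write pipelines. A rough sketch of the underlying test pattern (assumes the hadoop-hdfs test artifacts on the classpath; the MiniClusterSmoke class name and 4 KB payload are illustrative):

    // MiniClusterSmoke.java -- the MiniDFSCluster write pattern the contract
    // tests exercise; writes like this are what fail once both datanodes are
    // excluded or their transfer threads cannot be created.
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterSmoke {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                    .numDataNodes(2)   // same datanode count as the failing runs
                    .build();
            try {
                FileSystem fs = cluster.getFileSystem();
                Path p = new Path("/test/hadoop/file");
                OutputStream out = fs.create(p);
                out.write(new byte[4096]);  // small single-block write
                out.close();
                System.out.println("wrote " + fs.getFileStatus(p).getLen()
                        + " bytes");
            } finally {
                cluster.shutdown();
            }
        }
    }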

Hadoop-Hdfs-trunk - Build # 1282 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1282/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10667 lines...]
[...identical to the Build #1282 console output above...]
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1281

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1281/changes>

Changes:

[tomwhite] HADOOP-9183. Potential deadlock in ActiveStandbyElector.

[eli] HDFS-4032. Specify the charset explicitly rather than rely on the default. Contributed by Eli Collins

[tucu] MAPREDUCE-4907. TrackerDistributedCacheManager issues too many getFileStatus calls. (sandyr via tucu)

[atm] HADOOP-9155. FsPermission should have different default values: 777 for directories and 666 for files. Contributed by Binglin Chang. (See the umask sketch after this change list.)

[jlowe] MAPREDUCE-4848. TaskAttemptContext cast error during AM recovery. Contributed by Jerry Chen

[atm] HDFS-4306. PBHelper.convertLocatedBlock miss convert BlockToken. Contributed by Binglin Chang.

[suresh] HDFS-4363. Combine PBHelper and HdfsProtoUtil and remove redundant methods. Contributed by Suresh Srinivas.

[tgraves] YARN-325. RM CapacityScheduler can deadlock when getQueueInfo() is called and a container is completing (Arun C Murthy via tgraves)

[sseth] YARN-320. RM should always be able to renew its own tokens. Contributed by Daryn Sharp

[tomwhite] MAPREDUCE-1700. User supplied dependencies may conflict with MapReduce system JARs.

[szetszwo] HDFS-4261. Fix bugs in Balancer causing infinite loop and TestBalancerWithNodeGroup timing out. Contributed by Junping Du
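
On HADOOP-9155 above: the 777/666 split follows the POSIX convention that new directories start from mode 777 and new files from 666, with the effective permissions obtained by clearing the bits set in the process umask. A small illustration of that arithmetic (plain Java, hypothetical class, not the Hadoop FsPermission API):

    // Hypothetical illustration of POSIX-style default modes: directories
    // start from 0777 and files from 0666, minus the process umask bits.
    public class DefaultModes {
        static int apply(int mode, int umask) {
            return mode & ~umask;          // clear whatever the umask masks out
        }
        public static void main(String[] args) {
            int umask = 022;               // a common default umask
            System.out.printf("directory: %o%n", apply(0777, umask)); // 755
            System.out.printf("file:      %o%n", apply(0666, umask)); // 644
        }
    }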

------------------------------------------
[...truncated 10481 lines...]
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:310)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:111)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:710)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:530)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOverWriteAndRead(FileSystemContractBaseTest.java:489)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.974 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.505 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.608 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 301.757 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.086 sec
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.414 sec
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.61 sec
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 128.285 sec
Running org.apache.hadoop.hdfs.TestListPathServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.107 sec
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.027 sec
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.998 sec
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.398 sec
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.929 sec
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.766 sec
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.758 sec
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.23 sec
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.736 sec
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.451 sec
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.695 sec
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.734 sec
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.895 sec
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.823 sec
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.689 sec
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.223 sec
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.506 sec
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.045 sec
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.877 sec
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.454 sec
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.57 sec
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.048 sec
Running org.apache.hadoop.hdfs.TestParallelLocalRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.601 sec
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.146 sec
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.764 sec
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.324 sec
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.64 sec
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.683 sec
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.421 sec
Running org.apache.hadoop.hdfs.TestLease
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.339 sec
Running org.apache.hadoop.hdfs.TestListFilesInFileContext
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.24 sec
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.026 sec
Running org.apache.hadoop.hdfs.TestDFSClientFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.936 sec
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.471 sec
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.757 sec
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.388 sec
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.26 sec
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.732 sec
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.237 sec
Running org.apache.hadoop.hdfs.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.607 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.297 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.048 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.9 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.753 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.007 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.995 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.158 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.156 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.12 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.823 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.601 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.884 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.091 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.615 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.868 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.652 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.135 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 69, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.317 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.158 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.459 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec

Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:52934 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): DFSOutputStream is closed
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): File /test/hadoop/file could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.(..)
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":54581; 

Tests run: 1659, Failures: 0, Errors: 18, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:51:51.964s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:51:52.746s
[INFO] Finished at: Thu Jan 10 13:25:26 UTC 2013
[INFO] Final Memory: 47M/687M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HADOOP-9183
Updating MAPREDUCE-1700
Updating HDFS-4306
Updating YARN-325
Updating YARN-320
Updating HADOOP-9155
Updating HDFS-4363
Updating HDFS-4032
Updating MAPREDUCE-4848
Updating MAPREDUCE-4907
Updating HDFS-4261

Build failed in Jenkins: Hadoop-Hdfs-trunk #1280

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/changes>

Changes:

[vinodkv] MAPREDUCE-4810. Added new admin command options for MR AM. Contributed by Jerry Chen.

[acmurthy] MAPREDUCE-4520. Added support for MapReduce applications to request CPU cores along with memory, following YARN-2. Contributed by Arun C. Murthy.

[acmurthy] YARN-2. Enhanced CapacityScheduler to account for CPU along with memory for multi-dimensional resource scheduling. Contributed by Arun C. Murthy.

[szetszwo] svn merge -c -1428729 . for reverting HDFS-4352. Encapsulate arguments to BlockReaderFactory in a class

[szetszwo] svn merge -c -1430507 . for reverting HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes

[eli] HDFS-4035. LightWeightGSet and LightWeightHashSet increment a volatile without synchronization. Contributed by Eli Collins (see the volatile-counter sketch after this change list)

[eli] HDFS-4034. Remove redundant null checks. Contributed by Eli Collins

[eli] Updated CHANGES.txt to add HDFS-4033.

[eli] HDFS-4033. Miscellaneous findbugs 2 fixes. Contributed by Eli Collins

[todd] HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes. Contributed by Colin Patrick McCabe.

[eli] HDFS-4031. Update findbugsExcludeFile.xml to include findbugs 2 exclusions. Contributed by Eli Collins

[eli] HDFS-4030. BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs. Contributed by Eli Collins

[suresh] HADOOP-9119. Add test to FileSystemContractBaseTest to verify integrity of overwritten files. Contributed by Steve Loughran.

[tomwhite] MAPREDUCE-4278. Cannot run two local jobs in parallel from the same gateway. Contributed by Sandy Ryza.

[vinodkv] YARN-253. Fixed container-launch to not fail when there are no local resources to localize. Contributed by Tom White.
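
On HDFS-4035 and HDFS-4030 above: counter++ on a volatile field is a three-step read-modify-write, so concurrent increments can interleave and lose updates; volatile only guarantees visibility, not atomicity, which is why such counters get moved to AtomicLong. A minimal sketch of the race and the fix (hypothetical class name, not the Hadoop code):

    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical sketch: a volatile counter loses updates under contention,
    // an AtomicLong does not.
    public class VolatileCounterRace {
        static volatile long racy = 0;
        static final AtomicLong safe = new AtomicLong();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000000; i++) {
                        racy++;                  // read, add, write: three steps
                        safe.incrementAndGet();  // one atomic operation
                    }
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join();  b.join();
            // racy typically prints less than 2000000; safe is always 2000000.
            System.out.println("volatile=" + racy + " atomic=" + safe.get());
        }
    }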

------------------------------------------
[...truncated 10309 lines...]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 14 sec  <<< ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:301)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:464)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingFile(FileSystemContractBaseTest.java:410)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 15 sec  <<< ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:301)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:464)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingDirectory(FileSystemContractBaseTest.java:419)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 11 sec  <<< ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:301)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:464)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testInputStreamClosedTwice(FileSystemContractBaseTest.java:441)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 10 sec  <<< ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:301)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOutputStreamClosedTwice(FileSystemContractBaseTest.java:453)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 13 sec  <<< ERROR!
java.io.IOException: Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:301)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.writeAndRead(FileSystemContractBaseTest.java:530)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOverWriteAndRead(FileSystemContractBaseTest.java:489)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:33460 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":46057; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testOverWriteAndRead(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread

Tests run: 1024, Failures: 0, Errors: 18, Skipped: 5

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:38:24.071s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:38:24.844s
[INFO] Finished at: Wed Jan 09 13:12:05 UTC 2013
[INFO] Final Memory: 38M/774M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4352
Updating HDFS-4353
Updating HDFS-4035
Updating MAPREDUCE-4278
Updating HDFS-4034
Updating MAPREDUCE-4520
Updating HDFS-4033
Updating MAPREDUCE-4810
Updating YARN-253
Updating YARN-2
Updating HADOOP-9119
Updating HDFS-4030
Updating HDFS-4031


Build failed in Jenkins: Hadoop-Hdfs-trunk #1279

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1279/changes>

Changes:

[suresh] HDFS-4362. GetDelegationTokenResponseProto does not handle null token. Contributed by Suresh Srinivas.

[atm] HDFS-3970. Fix bug causing rollback of HDFS upgrade to result in bad VERSION file. Contributed by Vinay and Andrew Wang.

[szetszwo] Add target, .classpath, .project and .settings to svn:ignore.

[suresh] HADOOP-9181. Set daemon flag for HttpServer's QueuedThreadPool. Contributed by Liang Xie. (See the daemon ThreadFactory sketch after this change list.)

[vinodkv] YARN-170. Change NodeManager stop to be reentrant. Contributed by Sandy Ryza.
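
On HADOOP-9181 above: non-daemon pool threads keep a JVM alive after main() returns, which matters for servers embedded in tests. A generic sketch of marking pool threads as daemons via a ThreadFactory (plain java.util.concurrent with assumed names, not Jetty's QueuedThreadPool API):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadFactory;

    // Hypothetical sketch: a pool whose threads are daemons, so they never
    // keep the JVM alive on their own.
    public class DaemonPool {
        public static void main(String[] args) {
            ThreadFactory daemons = new ThreadFactory() {
                public Thread newThread(Runnable r) {
                    Thread t = new Thread(r);
                    t.setDaemon(true);    // JVM may exit while these threads run
                    return t;
                }
            };
            ExecutorService pool = Executors.newFixedThreadPool(4, daemons);
            pool.submit(new Runnable() {
                public void run() { System.out.println("work on a daemon thread"); }
            });
            pool.shutdown();              // explicit shutdown is still good practice
        }
    }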

------------------------------------------
[...truncated 10298 lines...]
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameFileAsExistingDirectory(FileSystemContractBaseTest.java:388)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 11 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryMoveToExistingDirectory(FileSystemContractBaseTest.java:411)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 12 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingFile(FileSystemContractBaseTest.java:434)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 11 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingDirectory(FileSystemContractBaseTest.java:443)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 7 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testInputStreamClosedTwice(FileSystemContractBaseTest.java:465)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 6 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOutputStreamClosedTwice(FileSystemContractBaseTest.java:477)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:52996 are bad. Aborting...
  testWriteReadAndDeleteEmptyFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":43325; 

Tests run: 1022, Failures: 0, Errors: 18, Skipped: 5

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:38:49.202s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:38:49.982s
[INFO] Finished at: Tue Jan 08 13:12:30 UTC 2013
[INFO] Final Memory: 41M/424M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-3970
Updating HADOOP-9181
Updating YARN-170
Updating HDFS-4362
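
The failure cascade in the Results block above has a single root cause: once testWriteReadAndDeleteEmptyFile hit "unable to create new native thread", every subsequent test failed with "Couldn't set up IO streams", because the IPC client could not spawn the connection threads it needs. The sketch below is a hypothetical, standalone reproduction (the class name ThreadExhaustion is made up and it is not part of the Hadoop build): it starts parked daemon threads until the OS refuses to back another java.lang.Thread with a native thread, at which point Thread.start() throws exactly this OutOfMemoryError.

public class ThreadExhaustion {
    public static void main(String[] args) {
        long started = 0;
        try {
            while (true) {
                // Each parked thread pins one native thread plus a full
                // thread stack, counting against the per-user process limit.
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE);
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
                started++;
            }
        } catch (OutOfMemoryError e) {
            // On a box at its thread limit this reports
            // "unable to create new native thread", not heap exhaustion.
            System.err.println("Thread creation failed after " + started + " threads: " + e);
        }
    }
}

On a shared slave like asf005 the limit is contended by every concurrent job, so raising the per-user ulimit or reducing the number of concurrently forked test JVMs are the usual remedies.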

Build failed in Jenkins: Hadoop-Hdfs-trunk #1278

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1278/changes>

Changes:

[vinodkv] MAPREDUCE-4920. Use security token protobuf definition from hadoop common. Contributed by Suresh Srinivas.

[vinodkv] YARN-315. Using the common security token protobuf definition from hadoop common. Contributed by Suresh Srinivas.

[vinodkv] YARN-217. Fix RMAdmin protocol description to make it work in secure mode also. Contributed by Devaraj K.

[szetszwo] HDFS-4351.  In BlockPlacementPolicyDefault.chooseTarget(..), numOfReplicas needs to be updated when avoiding stale nodes.  Contributed by Andrew Wang

------------------------------------------
[...truncated 9680 lines...]
Running org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.114 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.593 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAConfiguration
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.155 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.413 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAWebUI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.118 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.348 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.921 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.69 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.005 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.463 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.124 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.843 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSZKFailoverController
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.496 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.775 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.594 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.776 sec
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.183 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.986 sec
Running org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.235 sec
Running org.apache.hadoop.hdfs.server.namenode.TestParallelImageWrite
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.808 sec
Running org.apache.hadoop.hdfs.server.namenode.TestEditsDoubleBuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.963 sec
Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.193 sec
Running org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.373 sec
Running org.apache.hadoop.hdfs.server.namenode.TestMetaSave
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.022 sec
Running org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.135 sec
Running org.apache.hadoop.hdfs.server.namenode.TestTransferFsImage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.537 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.944 sec
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.472 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.138 sec
Running org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.1 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.217 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.138 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNode
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.073 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.152 sec
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.658 sec
Running org.apache.hadoop.hdfs.server.namenode.TestClusterId
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.158 sec
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.032 sec
Running org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.987 sec
Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.025 sec
Running org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.933 sec
Running org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.402 sec
Running org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.555 sec
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.16 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.146 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.431 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.372 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.207 sec
Running org.apache.hadoop.hdfs.server.namenode.TestPathComponents
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.157 sec
Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.344 sec
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.759 sec
Running org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.302 sec
Running org.apache.hadoop.hdfs.server.namenode.TestINodeFile
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.465 sec
Running org.apache.hadoop.hdfs.server.namenode.TestAllowFormat
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.105 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.337 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.112 sec
Running org.apache.hadoop.hdfs.server.namenode.TestStreamFile
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.203 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.538 sec
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.052 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.042 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFsLimits
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.121 sec
Running org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.789 sec
Running org.apache.hadoop.hdfs.server.namenode.metrics.TestNNMetricFilesInGetListingOps
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.468 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.608 sec
Running org.apache.hadoop.hdfs.server.namenode.TestStartup
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.446 sec
Running org.apache.hadoop.hdfs.server.namenode.TestStartupOptionUpgrade
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.704 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.432 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.549 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.072 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.582 sec
Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.86 sec
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.892 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeJspHelper
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.047 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.949 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.06 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.473 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.827 sec
Running org.apache.hadoop.hdfs.server.datanode.TestHSync
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.406 sec
Running org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 146.075 sec
Running org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.97 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.65 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.48 sec
Running org.apache.hadoop.hdfs.server.datanode.TestStartSecureDataNode
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.065 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.81 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.216 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.898 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.894 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.927 sec
Running org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.108 sec
Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.884 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReport
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.41 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.637 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.419 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.693 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.261 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.393 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.636 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeJsp
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.703 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.595 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.157 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.414 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.513 sec
Running org.apache.hadoop.hdfs.server.common.TestGetUriFromString
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec
Running org.apache.hadoop.hdfs.server.common.TestJspHelper
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.452 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.388 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.06 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.83 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

Results :

Tests run: 1310, Failures: 0, Errors: 0, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:18:04.225s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:05.438s
[INFO] Finished at: Mon Jan 07 13:25:43 UTC 2013
[INFO] Final Memory: 51M/487M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-315
Updating MAPREDUCE-4920
Updating HDFS-4351
Updating YARN-217
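
Note that this run reports 1310 tests with zero failures and zero errors, yet the build still fails: the last suite started (TestBalancerWithNodeGroup) never printed a results line, so its forked JVM died and surefire reported "The forked VM terminated without saying properly goodbye". One common way to rule out the "System.exit called ?" half of that question is to install a SecurityManager in the test JVM that vetoes exit, so a stray exit surfaces as a stack trace in the report instead of a vanished fork; Hadoop's org.apache.hadoop.util.ExitUtil plays a similar role for code that routes shutdowns through ExitUtil.terminate(..). The class below is a hypothetical sketch, not the mechanism this build uses.

import java.security.Permission;

public class NoExitGuard {
    public static void install() {
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkExit(int status) {
                // Turn a silent VM exit into a diagnosable test failure.
                throw new SecurityException("System.exit(" + status + ") called during tests");
            }

            @Override
            public void checkPermission(Permission perm) {
                // Intercept only exit; permit everything else.
            }
        });
    }
}

With NoExitGuard.install() called from a test setup hook, a genuine crash (OutOfMemoryError, SIGSEGV) is the only remaining explanation when a forked VM still disappears.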

Build failed in Jenkins: Hadoop-Hdfs-trunk #1277

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1277/changes>

Changes:

[tgraves] MAPREDUCE-4913. TestMRAppMaster#testMRAppMasterMissingStaging occasionally  exits (Jason Lowe via tgraves)

------------------------------------------
[...truncated 10293 lines...]
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameFileAsExistingDirectory(FileSystemContractBaseTest.java:388)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 12 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryMoveToExistingDirectory(FileSystemContractBaseTest.java:411)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 13 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingFile(FileSystemContractBaseTest.java:434)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 13 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingDirectory(FileSystemContractBaseTest.java:443)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 7 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testInputStreamClosedTwice(FileSystemContractBaseTest.java:465)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 7 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOutputStreamClosedTwice(FileSystemContractBaseTest.java:477)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


Results :

Tests in error: 
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:45364 are bad. Aborting...
  testWriteReadAndDeleteEmptyFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":53191; 

Tests run: 1021, Failures: 0, Errors: 18, Skipped: 5

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:38:26.362s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:38:27.167s
[INFO] Finished at: Sun Jan 06 13:12:27 UTC 2013
[INFO] Final Memory: 41M/450M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4913
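
A note on the failure pattern above: the 18 WebHdfsFileSystemContract errors share one root cause rather than being 18 independent bugs. Once the JVM reported "unable to create new native thread", every subsequent IPC connection attempt to the same NameNode port failed with "Couldn't set up IO streams", and the rest of the suite cascaded. A rough triage sketch for a Linux build slave (the account name "jenkins" is an assumption; substitute whatever user runs the builds):

    # Inspect thread limits when a JVM reports "unable to create new
    # native thread" (standard Linux tooling).
    ulimit -u                          # max user processes/threads in this shell
    ps -L -u jenkins | wc -l           # rough live-thread count for user 'jenkins'
    cat /proc/sys/kernel/threads-max   # system-wide thread ceiling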

Hadoop-Hdfs-trunk - Build # 1277 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1277/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
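
When the forked VM dies like this, the Maven hints in the console tail above are the practical next step: -e prints full stack traces and -X enables debug logging. A sketch of narrowing the re-run to the failing module and test class, assuming the command is issued from the trunk checkout whose layout appears in the log URLs:

    # Re-run only the hadoop-hdfs module with one test class and debug output;
    # -e, -X, -pl and -Dtest are standard Maven/Surefire options.
    mvn test -e -X -pl hadoop-hdfs-project/hadoop-hdfs -Dtest=TestWebHdfsFileSystemContract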

Build failed in Jenkins: Hadoop-Hdfs-trunk #1276

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1276/changes>

Changes:

[bobby] MAPREDUCE-4819. AM can rerun job after reporting final job status to the client (bobby and Bikas Saha via bobby)

[tgraves] MAPREDUCE-4894. Renewal / cancellation of JobHistory tokens (Siddharth Seth via tgraves)

[tgraves] YARN-50. Implement renewal / cancellation of Delegation Tokens(Siddharth Seth via tgraves)

[jlowe] MAPREDUCE-4832. MR AM can get in a split brain situation. Contributed by Jason Lowe

[suresh] HADOOP-9173. Add security token protobuf definition to common and use it in hdfs. Contributed by Suresh Srinivas.

[suresh] HADOOP-9173. Add security token protobuf definition to common and use it in hdfs. Contributed by Suresh Srinivas.

------------------------------------------
[...truncated 10724 lines...]
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameFileAsExistingDirectory(FileSystemContractBaseTest.java:388)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 13 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryMoveToExistingDirectory(FileSystemContractBaseTest.java:411)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 13 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingFile(FileSystemContractBaseTest.java:434)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 12 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testRenameDirectoryAsExistingDirectory(FileSystemContractBaseTest.java:443)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 7 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.createFile(FileSystemContractBaseTest.java:488)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testInputStreamClosedTwice(FileSystemContractBaseTest.java:465)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract)  Time elapsed: 7 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
	at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:169)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:308)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:109)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$1.close(WebHdfsFileSystem.java:708)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.testOutputStreamClosedTwice(FileSystemContractBaseTest.java:477)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)


Results :

Tests in error: 
  testPipelineRecoveryStress(org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover): test timed out after 120000 milliseconds
  testResponseCode(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): All datanodes 127.0.0.1:33616 are bad. Aborting...
  testWriteReadAndDeleteHalfABlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Unexpected HTTP response: code=500 != 201, op=CREATE, message=unable to create new native thread
  testWriteReadAndDeleteOneBlock(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testWriteReadAndDeleteOneAndAHalfBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testWriteReadAndDeleteTwoBlocks(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testOverwrite(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testWriteInNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testDeleteRecursively(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileMoveToNonExistentDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameFileAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameDirectoryMoveToExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameDirectoryAsExistingFile(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testRenameDirectoryAsExistingDirectory(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testInputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 
  testOutputStreamClosedTwice(org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract): Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "asf005.sp2.ygridcore.net/67.195.138.27"; destination host is: "localhost":58007; 

Tests run: 1021, Failures: 0, Errors: 18, Skipped: 5

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:38:28.372s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:38:29.134s
[INFO] Finished at: Sat Jan 05 13:11:49 UTC 2013
[INFO] Final Memory: 37M/325M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4819
Updating MAPREDUCE-4832
Updating HADOOP-9173
Updating YARN-50
Updating MAPREDUCE-4894
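
This run (#1276) shows the same thread-exhaustion cascade with an extra trigger at the top of the list: testPipelineRecoveryStress timed out after 120 seconds, then the datanodes were marked bad and the WebHDFS contract tests followed. The traces also show how a server-side failure reaches the client: WebHDFS answers with HTTP 500, and validateResponse / JsonUtil.toRemoteException rebuild the server's exception locally. A minimal illustrative sketch of that status check, not Hadoop's actual implementation:

    // Illustrative only: map an unexpected WebHDFS-style HTTP status to an
    // IOException carrying the server's message, mirroring the log lines above.
    import java.io.IOException;

    public class ValidateResponseSketch {
      static void validate(int actual, int expected, String op, String serverMsg)
          throws IOException {
        if (actual != expected) {
          throw new IOException("Unexpected HTTP response: code=" + actual
              + " != " + expected + ", op=" + op + ", message=" + serverMsg);
        }
      }

      public static void main(String[] args) {
        try {
          validate(500, 201, "CREATE", "unable to create new native thread");
        } catch (IOException e) {
          System.out.println(e.getMessage());  // same format as the test errors
        }
      }
    }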

Build failed in Jenkins: Hadoop-Hdfs-trunk #1275

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1275/changes>

Changes:

[tomwhite] YARN-286. Add a YARN ApplicationClassLoader.

[szetszwo] HDFS-4270. Introduce soft and hard limits for max replication so that replications of the highest priority are allowed to choose a source datanode that has reached its soft limit but not the hard limit.  Contributed by Derek Dagit

[todd] HDFS-4352. Encapsulate arguments to BlockReaderFactory in a class. Contributed by Colin Patrick McCabe.

[todd] HDFS-4302. Fix fatal exception when starting NameNode with DEBUG logs. Contributed by Eugene Koontz.

[atm] Add file which was accidentally missed during commit of HDFS-4346.

[sseth] YARN-103. Add a yarn AM-RM client module. Contributed by Bikas Saha.

[bobby] MAPREDUCE-4279. getClusterStatus() fails with null pointer exception when running jobs in local mode (Devaraj K via bobby)

[tomwhite] YARN-301. Fair scheduler throws ConcurrentModificationException when iterating over app's priorities. Contributed by Sandy Ryza.

[tomwhite] YARN-300. After YARN-271, fair scheduler can infinite loop and not schedule any application. Contributed by Sandy Ryza.

[tomwhite] YARN-288. Fair scheduler queue doesn't accept any jobs when ACLs are configured. Contributed by Sandy Ryza.

[tomwhite] YARN-192. Node update causes NPE in the fair scheduler. Contributed by Sandy Ryza

------------------------------------------
[...truncated 9824 lines...]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testCanReadData(org.apache.hadoop.hdfs.server.namenode.TestBackupNode)  Time elapsed: 597 sec  <<< FAILURE!
java.lang.AssertionError: Port in use: 0.0.0.0:50105
	at org.junit.Assert.fail(Assert.java:91)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCanReadData(TestBackupNode.java:474)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.189 sec
Running org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.417 sec
Running org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.94 sec
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.56 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.218 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.105 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.353 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.071 sec
Running org.apache.hadoop.hdfs.server.namenode.TestPathComponents
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec
Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.536 sec
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.103 sec
Running org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.776 sec
Running org.apache.hadoop.hdfs.server.namenode.TestINodeFile
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.351 sec
Running org.apache.hadoop.hdfs.server.namenode.TestAllowFormat
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.364 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.339 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.183 sec
Running org.apache.hadoop.hdfs.server.namenode.TestStreamFile
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.243 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.97 sec
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.163 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.865 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFsLimits
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.079 sec
Running org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.291 sec
Running org.apache.hadoop.hdfs.server.namenode.metrics.TestNNMetricFilesInGetListingOps
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.008 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.591 sec
Running org.apache.hadoop.hdfs.server.namenode.TestStartup
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.766 sec
Running org.apache.hadoop.hdfs.server.namenode.TestStartupOptionUpgrade
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.615 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.344 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.39 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.069 sec
Running org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.28 sec
Running org.apache.hadoop.hdfs.server.namenode.TestGetImageServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.926 sec
Running org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.196 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeJspHelper
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.933 sec
Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.834 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.245 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.527 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.685 sec
Running org.apache.hadoop.hdfs.server.datanode.TestHSync
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.063 sec
Running org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 146.907 sec
Running org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.723 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.119 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.562 sec
Running org.apache.hadoop.hdfs.server.datanode.TestStartSecureDataNode
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.071 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.815 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.476 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.739 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.973 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.603 sec
Running org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.2 sec
Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.918 sec
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReport
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.637 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.803 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.432 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.488 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestReplicaMap
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.TestRoundRobinVolumeChoosingPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.252 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.266 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.461 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeJsp
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.75 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.681 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.707 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.741 sec
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.242 sec
Running org.apache.hadoop.hdfs.server.common.TestGetUriFromString
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec
Running org.apache.hadoop.hdfs.server.common.TestJspHelper
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.169 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.1 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.359 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.071 sec
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

Results :

Failed tests:   testCheckpointNode(org.apache.hadoop.hdfs.server.namenode.TestBackupNode): Port in use: 0.0.0.0:50105
  testBackupNode(org.apache.hadoop.hdfs.server.namenode.TestBackupNode): Port in use: 0.0.0.0:50105
  testCanReadData(org.apache.hadoop.hdfs.server.namenode.TestBackupNode): Port in use: 0.0.0.0:50105

Tests in error: 
  testBackupNodeTailsEdits(org.apache.hadoop.hdfs.server.namenode.TestBackupNode): Port in use: 0.0.0.0:50105

Tests run: 1310, Failures: 3, Errors: 1, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:18:08.353s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:09.467s
[INFO] Finished at: Fri Jan 04 12:52:35 UTC 2013
[INFO] Final Memory: 39M/680M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: ExecutionException; nested exception is java.util.concurrent.ExecutionException: java.lang.RuntimeException: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4352
Updating YARN-288
Updating YARN-103
Updating MAPREDUCE-4279
Updating HDFS-4346
Updating YARN-192
Updating YARN-301
Updating YARN-271
Updating YARN-300
Updating HDFS-4302
Updating HDFS-4270
Updating YARN-286
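
The TestBackupNode failures in this build are a different, long-standing kind of flake: "Port in use: 0.0.0.0:50105". The BackupNode's HTTP server appears to bind a fixed default port (50105), so concurrent builds or a leftover process on a shared slave collide with it. The usual cure in tests is to bind port 0 and let the OS choose; a plain-JDK sketch:

    // Sketch: obtain an ephemeral port instead of a hard-coded one, the common
    // fix for "Port in use" flakiness on shared build machines (Java 7+).
    import java.net.ServerSocket;

    public class FreePort {
      public static void main(String[] args) throws Exception {
        try (ServerSocket s = new ServerSocket(0)) {  // 0 = OS picks a free port
          System.out.println("free port: " + s.getLocalPort());
        }
      }
    }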

Hadoop-Hdfs-trunk - Build # 1275 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1275/

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
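
All four TestBackupNode failures above report the same thing: the backup node's fixed HTTP port, 0.0.0.0:50105, was already bound, most likely by a leftover process or a concurrent build on the shared Jenkins host. Tests that hard-code a port are inherently racy on shared executors; the common fix is to bind port 0 and let the kernel hand out a free ephemeral port. A minimal sketch of that idiom (not the TestBackupNode code itself):

    import java.io.IOException;
    import java.net.ServerSocket;

    /**
     * Minimal sketch of the ephemeral-port idiom: bind port 0 and ask
     * the socket which port the OS actually assigned, instead of
     * hard-coding a port such as 50105 that another process may hold.
     */
    public class FreePort {
      static int pickFreePort() throws IOException {
        ServerSocket socket = new ServerSocket(0); // 0 = OS picks a free port
        try {
          return socket.getLocalPort();
        } finally {
          socket.close();
        }
      }

      public static void main(String[] args) throws IOException {
        System.out.println("Free port: " + pickFreePort());
      }
    }

There is still a small window between closing the probe socket and the server binding that port, so a server that can bind port 0 itself and then report the port it got is preferable to probing.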

Build failed in Jenkins: Hadoop-Hdfs-trunk #1274

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1274/changes>

Changes:

[szetszwo] HDFS-4346. Add SequentialNumber as a base class for INodeId and GenerationStamp.

[atm] HDFS-4338. TestNameNodeMetrics#testCorruptBlock is flaky. Contributed by Andrew Wang.

[jlowe] YARN-293. Node Manager leaks LocalizerRunner object for every Container. Contributed by Robert Joseph Evans

[suresh] MAPREDUCE-4884. Streaming tests fail to start MiniMRCluster due to missing queue configuration. Contributed by Chris Nauroth.

------------------------------------------
[...truncated 7152 lines...]
[ERROR] class INodeId extends SequentialNumber {
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[32,48] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[33,48] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[385,13] cannot find symbol
[ERROR] symbol  : method skipTo(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[393,11] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[398,18] cannot find symbol
[ERROR] symbol  : method getCurrentValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[403,18] cannot find symbol
[ERROR] symbol  : method nextValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[412,19] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[414,11] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4768,19] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4775,26] cannot find symbol
[ERROR] symbol  : method getCurrentValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4787,35] cannot find symbol
[ERROR] symbol  : method nextValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[55,4] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[55,33] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[59,4] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[59,35] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
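
Every "cannot find symbol" above points at the same missing class: org.apache.hadoop.util.SequentialNumber, which the HDFS-4346 change listed in this build's change log makes the base class of GenerationStamp and INodeId. The callers in FSNamesystem expect getCurrentValue(), setCurrentValue(long), nextValue() and skipTo(long) on it, so the failure pattern is consistent with the hadoop-hdfs half of the change compiling against a hadoop-common snapshot that does not yet contain the new class. Purely to illustrate the expected shape, and not the actual HDFS-4346 code, such a base class could look like:

    package org.apache.hadoop.util;

    import java.util.concurrent.atomic.AtomicLong;

    /**
     * Hypothetical sketch of the missing base class, reconstructed only
     * from the method names in the compiler errors above; the real
     * HDFS-4346 implementation may differ.
     */
    public abstract class SequentialNumber {
      private final AtomicLong currentValue;

      protected SequentialNumber(long initialValue) {
        currentValue = new AtomicLong(initialValue);
      }

      public long getCurrentValue() {
        return currentValue.get();
      }

      public void setCurrentValue(long value) {
        currentValue.set(value);
      }

      /** Increment the sequence and return the new value. */
      public long nextValue() {
        return currentValue.incrementAndGet();
      }

      /** Advance the sequence so it is at least the given value. */
      public void skipTo(long newValue) {
        for (;;) {
          long current = currentValue.get();
          if (newValue <= current
              || currentValue.compareAndSet(current, newValue)) {
            return;
          }
        }
      }
    }

When a CI box keeps failing this way after the common-side change has landed, a stale hadoop-common SNAPSHOT in the local Maven repository is the usual suspect; rebuilding from the repository root, or purging the local snapshot, brings the modules back in sync.
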
+ cd hadoop-hdfs-project
+ /home/jenkins/tools/maven/latest/bin/mvn clean verify checkstyle:checkstyle findbugs:findbugs -Pdist -Pnative -Dtar -Pdocs
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO] 
[INFO] Apache Hadoop HDFS
[INFO] Apache Hadoop HttpFS
[INFO] Apache Hadoop HDFS BookKeeper Journal
[INFO] Apache Hadoop HDFS Project
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-dir>
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-protobuf-generated-sources-directory) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java>
[INFO] Executed tasks
[INFO] 
[INFO] --- jspc-maven-plugin:2.0-alpha-3:compile (hdfs) @ hadoop-hdfs ---
[WARNING] Compiled JSPs will not be added to the project and web.xml will not be modified, either because includeInProject is set to false or because the project's packaging is not 'war'.
Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-src/main/jsp>
Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/classes>
[INFO] Compiling 8 JSP source files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-src/main/jsp>
log4j:WARN No appenders could be found for logger (org.apache.jasper.JspC).
log4j:WARN Please initialize the log4j system properly.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html for an explanation.
[INFO] Compiled completed in 0:00:00.258
[INFO] 
[INFO] --- jspc-maven-plugin:2.0-alpha-3:compile (secondary) @ hadoop-hdfs ---
[WARNING] Compiled JSPs will not be added to the project and web.xml will not be modified, either because includeInProject is set to false or because the project's packaging is not 'war'.
[INFO] Compiling 1 JSP source file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-src/main/jsp>
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html for an explanation.
[INFO] Compiled completed in 0:00:00.017
[INFO] 
[INFO] --- jspc-maven-plugin:2.0-alpha-3:compile (journal) @ hadoop-hdfs ---
[WARNING] Compiled JSPs will not be added to the project and web.xml will not be modified, either because includeInProject is set to false or because the project's packaging is not 'war'.
[INFO] Compiling 1 JSP source file to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-src/main/jsp>
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html for an explanation.
[INFO] Compiled completed in 0:00:00.016
[INFO] 
[INFO] --- jspc-maven-plugin:2.0-alpha-3:compile (datanode) @ hadoop-hdfs ---
[WARNING] Compiled JSPs will not be added to the project and web.xml will not be modified, either because includeInProject is set to false or because the project's packaging is not 'war'.
[INFO] Compiling 3 JSP source files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-src/main/jsp>
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html for an explanation.
[INFO] Compiled completed in 0:00:00.024
[INFO] 
[INFO] --- build-helper-maven-plugin:1.5:add-source (add-source) @ hadoop-hdfs ---
[INFO] Source directory: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java> added.
[INFO] Source directory: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-src/main/jsp> added.
[INFO] 
[INFO] --- exec-maven-plugin:1.2:exec (compile-proto) @ hadoop-hdfs ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2:exec (compile-proto-datanode) @ hadoop-hdfs ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2:exec (compile-proto-namenode) @ hadoop-hdfs ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2:exec (compile-proto-qjournal) @ hadoop-hdfs ---
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-hdfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-hdfs ---
[INFO] Compiling 474 source files to <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/classes>
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/GenerationStamp.java>:[21,29] cannot find symbol
symbol  : class SequentialNumber
location: package org.apache.hadoop.util
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/GenerationStamp.java>:[27,37] cannot find symbol
symbol: class SequentialNumber
public class GenerationStamp extends SequentialNumber {
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java>:[21,29] cannot find symbol
symbol  : class SequentialNumber
location: package org.apache.hadoop.util
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java>:[27,22] cannot find symbol
symbol: class SequentialNumber
class INodeId extends SequentialNumber {
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[32,48] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[33,48] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[385,13] cannot find symbol
symbol  : method skipTo(long)
location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[393,11] cannot find symbol
symbol  : method setCurrentValue(long)
location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[398,18] cannot find symbol
symbol  : method getCurrentValue()
location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[403,18] cannot find symbol
symbol  : method nextValue()
location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[412,19] cannot find symbol
symbol  : method setCurrentValue(long)
location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[414,11] cannot find symbol
symbol  : method setCurrentValue(long)
location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4768,19] cannot find symbol
symbol  : method setCurrentValue(long)
location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4775,26] cannot find symbol
symbol  : method getCurrentValue()
location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4787,35] cannot find symbol
symbol  : method nextValue()
location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[55,4] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[55,33] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[59,4] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[59,35] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[INFO] 19 errors 
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [16.236s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 17.000s
[INFO] Finished at: Thu Jan 03 11:32:59 UTC 2013
[INFO] Final Memory: 30M/355M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hadoop-hdfs: Compilation failure: Compilation failure:
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/GenerationStamp.java>:[21,29] cannot find symbol
[ERROR] symbol  : class SequentialNumber
[ERROR] location: package org.apache.hadoop.util
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/GenerationStamp.java>:[27,37] cannot find symbol
[ERROR] symbol: class SequentialNumber
[ERROR] public class GenerationStamp extends SequentialNumber {
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java>:[21,29] cannot find symbol
[ERROR] symbol  : class SequentialNumber
[ERROR] location: package org.apache.hadoop.util
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeId.java>:[27,22] cannot find symbol
[ERROR] symbol: class SequentialNumber
[ERROR] class INodeId extends SequentialNumber {
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[32,48] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[33,48] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[385,13] cannot find symbol
[ERROR] symbol  : method skipTo(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[393,11] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[398,18] cannot find symbol
[ERROR] symbol  : method getCurrentValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[403,18] cannot find symbol
[ERROR] symbol  : method nextValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[412,19] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[414,11] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.namenode.INodeId
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4768,19] cannot find symbol
[ERROR] symbol  : method setCurrentValue(long)
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4775,26] cannot find symbol
[ERROR] symbol  : method getCurrentValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java>:[4787,35] cannot find symbol
[ERROR] symbol  : method nextValue()
[ERROR] location: class org.apache.hadoop.hdfs.server.common.GenerationStamp
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[55,4] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[55,33] com.sun.org.apache.xml.internal.serialize.OutputFormat is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[59,4] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java>:[59,35] com.sun.org.apache.xml.internal.serialize.XMLSerializer is Sun proprietary API and may be removed in a future release
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4346
Updating HDFS-4338
Updating YARN-293
Updating MAPREDUCE-4884
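
Alongside the missing-symbol errors, the compiler flags XmlEditsVisitor.java for importing com.sun.org.apache.xml.internal.serialize.OutputFormat and XMLSerializer. Those are JDK-internal classes with no compatibility guarantee, which is why javac reports them as Sun proprietary API that "may be removed in a future release". The supported way to serialize a DOM tree is the javax.xml.transform API; a small self-contained sketch of that replacement (the element name is illustrative, not taken from XmlEditsVisitor):

    import java.io.StringWriter;

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;

    import org.w3c.dom.Document;

    /**
     * Minimal sketch: pretty-print a DOM document with the supported
     * javax.xml.transform API instead of the JDK-internal
     * OutputFormat/XMLSerializer classes flagged above.
     */
    public class DomSerializeSketch {
      public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().newDocument();
        doc.appendChild(doc.createElement("EDITS")); // illustrative root element

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes"); // human-readable output

        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        System.out.print(out);
      }
    }

Because the transformer ships with the JDK, this variant compiles without warnings and keeps working even if the internal com.sun serializer classes disappear in a later release.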