Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2012/03/02 13:56:07 UTC

Hadoop-Hdfs-trunk - Build # 972 - Unstable

See https://builds.apache.org/job/Hadoop-Hdfs-trunk/972/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 12029 lines...]
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-httpfs ---
[INFO] 
[INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-hdfs-httpfs ---
[INFO] Copying files to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/hadoop-hdfs-httpfs-0.24.0-SNAPSHOT
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (dist) @ hadoop-hdfs-httpfs ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads
      [get] Getting: http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.32/bin/apache-tomcat-6.0.32.tar.gz
      [get] To: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/tomcat.tar.gz
.............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/tomcat.exp
     [exec] 
     [exec] gzip: stdin: unexpected end of file
     [exec] tar: Unexpected EOF in archive
     [exec] tar: Unexpected EOF in archive
     [exec] tar: Error is not recoverable: exiting now
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:10.541s]
[INFO] Apache Hadoop HttpFS .............................. FAILURE [9.274s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5:20.582s
[INFO] Finished at: Fri Mar 02 11:40:37 UTC 2012
[INFO] Final Memory: 72M/752M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (dist) on project hadoop-hdfs-httpfs: An Ant BuildException has occured: exec returned: 2 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs-httpfs
+ /home/jenkins/tools/maven/latest/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HADOOP-8124
Updating HDFS-3021
Updating HDFS-3038
Updating HDFS-3037
Updating HDFS-3036
Updating HDFS-3034
Updating MAPREDUCE-3956
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable
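
The console failure above stems from a truncated Tomcat tarball: the archive fetched to downloads/tomcat.tar.gz ends prematurely, so gzip reports an unexpected end of file, tar exits with a non-zero status, and the antrun "dist" target surfaces that as "exec returned: 2". A minimal Java sketch of the kind of pre-extraction check that would catch such truncation (the class name and local path are hypothetical, not part of the build):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.zip.GZIPInputStream;

    public class GzipCheck {
        // Returns true if the .tar.gz decompresses all the way to its end,
        // i.e. the download was not truncated; a premature EOF surfaces as an
        // IOException, the same condition gzip reports as "unexpected end of file".
        static boolean isComplete(String path) {
            byte[] buf = new byte[8192];
            try (GZIPInputStream in = new GZIPInputStream(new FileInputStream(path))) {
                while (in.read(buf) != -1) {
                    // discard the bytes; we only care whether the stream ends cleanly
                }
                return true;
            } catch (IOException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            // Hypothetical relative path; the job downloads to downloads/tomcat.tar.gz
            System.out.println(isComplete("downloads/tomcat.tar.gz"));
        }
    }
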



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileAppend4.testRecoverFinalizedBlock

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1186)
	at java.lang.Thread.join(Thread.java:1239)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.join(BPOfferService.java:444)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$BlockPoolManager.shutDownAll(DataNode.java:301)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1193)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1095)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1076)
	at org.apache.hadoop.hdfs.TestFileAppend4.__CLR3_0_21z1ppc1313(TestFileAppend4.java:209)
	at org.apache.hadoop.hdfs.TestFileAppend4.testRecoverFinalizedBlock(TestFileAppend4.java:149)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)


REGRESSION:  org.apache.hadoop.hdfs.TestFileAppend4.testCompleteOtherLeaseHoldersFile

Error Message:
Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked.
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:389)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:332)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:293)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:327)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:453)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:445)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:746)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:612)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:516)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:255)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:79)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:241)
	at org.apache.hadoop.hdfs.TestFileAppend4.__CLR3_0_269ddf91327(TestFileAppend4.java:221)
	at org.apache.hadoop.hdfs.TestFileAppend4.testCompleteOtherLeaseHoldersFile(TestFileAppend4.java:220)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
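
This second regression looks like collateral damage from the first: because the earlier test never finished shutting down its cluster, the NameNode storage directory .../dfs/name1 is still locked when the next test builds a fresh MiniDFSCluster in the same JVM. HDFS guards each storage directory with an in_use.lock file; a rough sketch of that locking pattern using java.nio file locks (illustrative only, not the actual Storage implementation):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileLock;
    import java.nio.channels.OverlappingFileLockException;

    public class StorageLockSketch {
        // Try to take an exclusive lock on <dir>/in_use.lock; fail the way the
        // stack trace above does if another holder (another process, or a
        // still-running cluster in the same JVM) already owns it.
        static FileLock tryLockStorage(File storageDir) throws IOException {
            File lockFile = new File(storageDir, "in_use.lock");
            RandomAccessFile raf = new RandomAccessFile(lockFile, "rws");
            try {
                FileLock lock = raf.getChannel().tryLock();
                if (lock == null) {
                    raf.close();
                    throw new IOException("Cannot lock storage " + storageDir
                            + ". The directory is already locked.");
                }
                return lock;
            } catch (OverlappingFileLockException e) {
                // Same-JVM holder, e.g. a MiniDFSCluster that never shut down.
                raf.close();
                throw new IOException("Cannot lock storage " + storageDir
                        + ". The directory is already locked.", e);
            }
        }
    }
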