Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2011/10/12 03:01:16 UTC

Hadoop-Hdfs-22-branch - Build # 97 - Failure

See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/97/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3108 lines...]
   [delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit 
    [junit]          jar:file:/home/jenkins/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit]      and jar:file:/home/jenkins/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.fs.TestFiListPath
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.007 sec
    [junit] Running org.apache.hadoop.fs.TestFiRename
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 75.327 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
    [junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 38.603 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHftp
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 39.164 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiPipelines
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.592 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
    [junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 220.671 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
    [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 294.075 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.092 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:764: Tests failed!

Total time: 158 minutes 42 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-1762
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNodeCount.testNodeCount

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.hdfs.server.namenode.BlockManager.countNodes(BlockManager.java:1436)
	at org.apache.hadoop.hdfs.server.namenode.TestNodeCount.__CLR2_4_39bdgm6whp(TestNodeCount.java:119)
	at org.apache.hadoop.hdfs.server.namenode.TestNodeCount.testNodeCount(TestNodeCount.java:40)




Hadoop-Hdfs-22-branch - Build # 110 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/110/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 20878 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1a5b9d2" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@3fff60" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@61c01f" Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@127184e" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@12ff166" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@136aa2b" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1139283" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@d59930" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@35a6f8" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer FAILED (crashed)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 34.909 sec

checkfailure:
    [touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:762: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:514: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:699: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:673: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:741: Tests failed!

Total time: 82 minutes 28 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer.testOutOfMemoryErrorInDataXceiverServerRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.
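
Context for the repeated "Pretend there's no more memory" errors above: this is the fault-injection build (build-fi), where the Ant targets weave AspectJ advice from src/test/aop (see the aop.xml reference in the BUILD FAILED trace) into DataXceiver.run(); the woven run_aroundBody1$advice frames throw a simulated OutOfMemoryError that TestFiDataXceiverServer is meant to survive. A minimal annotation-style sketch of such an around advice is below; the class name, pointcut, and annotation style are illustrative assumptions and not the actual aspect shipped under src/test/aop.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // Hypothetical sketch only: simulates the fault injected by the
    // build-fi AspectJ weaving seen in the console log above.
    @Aspect
    public class DataXceiverOomInjector {

        @Around("execution(void org.apache.hadoop.hdfs.server.datanode.DataXceiver.run())")
        public Object injectOom(ProceedingJoinPoint pjp) throws Throwable {
            // Never call pjp.proceed(): instead of running the real body,
            // throw the simulated OOM so the test can verify that
            // DataXceiverServer keeps accepting connections after the
            // xceiver thread dies with this error.
            throw new OutOfMemoryError("Pretend there's no more memory");
        }
    }

When the forked junit VM dies instead of surviving the injected error, Ant reports the "Forked Java VM exited abnormally" failure shown in this summary rather than an ordinary assertion failure.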




Hadoop-Hdfs-22-branch - Build # 109 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/109/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 21394 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@704262" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@ccd445" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1bb95e3" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1a8237a" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@19c2c03" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@32c3e8" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1970a76" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@6d5307" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer FAILED (crashed)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.008 sec

checkfailure:
    [touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:762: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:514: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:699: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:673: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:741: Tests failed!

Total time: 106 minutes 22 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2513
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer.testOutOfMemoryErrorInDataXceiverServerRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.




Hadoop-Hdfs-22-branch - Build # 108 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/108/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1117 lines...]
[ivy:resolve] 		module not found: org.mortbay.jetty#jsp-2.1;6.1.26
[ivy:resolve] 	==== apache-snapshot: tried
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#jsp-2.1;6.1.26!jsp-2.1.jar:
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.jar
[ivy:resolve] 	==== maven2: tried
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#jsp-2.1;6.1.26!jsp-2.1.jar:
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.jar
[ivy:resolve] 		module not found: org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}
[ivy:resolve] 	==== apache-snapshot: tried
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}!servlet-api-2.5.jar:
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.jar
[ivy:resolve] 	==== maven2: tried
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}!servlet-api-2.5.jar:
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.jar
[ivy:resolve] 		::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] 		::          UNRESOLVED DEPENDENCIES         ::
[ivy:resolve] 		::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] 		:: org.mortbay.jetty#jsp-api-2.1;6.1.26: not found
[ivy:resolve] 		:: org.mortbay.jetty#jsp-2.1;6.1.26: not found
[ivy:resolve] 		:: org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}: not found
[ivy:resolve] 		::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] 
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:375: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build-contrib.xml:298: impossible to resolve dependencies:
	resolve failed - see output for details

Total time: 16 seconds


======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-22-branch - Build # 107 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/107/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 1116 lines...]
[ivy:resolve] 	==== apache-snapshot: tried
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#jsp-2.1;6.1.26!jsp-2.1.jar:
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.jar
[ivy:resolve] 	==== maven2: tried
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#jsp-2.1;6.1.26!jsp-2.1.jar:
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/jsp-2.1/6.1.26/jsp-2.1-6.1.26.jar
[ivy:resolve] 		module not found: org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}
[ivy:resolve] 	==== apache-snapshot: tried
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}!servlet-api-2.5.jar:
[ivy:resolve] 	  https://repository.apache.org/content/repositories/snapshots/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.jar
[ivy:resolve] 	==== maven2: tried
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.pom
[ivy:resolve] 	  -- artifact org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}!servlet-api-2.5.jar:
[ivy:resolve] 	  http://repo1.maven.org/maven2/org/mortbay/jetty/servlet-api-2.5/${servlet-api-2.5.version}/servlet-api-2.5-${servlet-api-2.5.version}.jar
[ivy:resolve] 		::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] 		::          UNRESOLVED DEPENDENCIES         ::
[ivy:resolve] 		::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] 		:: org.mortbay.jetty#jsp-api-2.1;6.1.26: not found
[ivy:resolve] 		:: org.mortbay.jetty#jsp-2.1;6.1.26: not found
[ivy:resolve] 		:: org.mortbay.jetty#servlet-api-2.5;${servlet-api-2.5.version}: not found
[ivy:resolve] 		::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] 
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:375: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build-contrib.xml:298: impossible to resolve dependencies:
	resolve failed - see output for details

Total time: 16 seconds


======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2513
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Hdfs-22-branch - Build # 106 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/106/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10649 lines...]
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@12017c1" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@10e60a0" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1eac168" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@834d12" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@e69939" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@143ab7c" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@14bd0c4" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@94531f" 
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer FAILED (crashed)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 34.869 sec

checkfailure:
    [touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:762: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:514: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:699: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:673: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:741: Tests failed!

Total time: 119 minutes 35 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsConfigs

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer.testOutOfMemoryErrorInDataXceiverServerRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.


FAILED:  org.apache.hadoop.hdfs.TestModTime.testModTime

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 105 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/105/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 23264 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@890dec" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@f794b" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@c86671" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@14d04dd" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@163fc6d" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@69b9d1" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@236e1b" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@9923c4" 
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer FAILED (crashed)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 34.831 sec

checkfailure:
    [touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:762: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:514: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:699: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:673: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:741: Tests failed!

Total time: 149 minutes 11 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2002
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer.testOutOfMemoryErrorInDataXceiverServerRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.


REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery.testBlockSynchronization

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.TestModTime.testModTime

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testCorruptBlock

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 104 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/104/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 706711 lines...]
    [junit] 2011-10-24 21:06:22,601 INFO  datanode.DataNode (DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads is 2
    [junit] 2011-10-24 21:06:22,602 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-10-24 21:06:22,602 INFO  datanode.DataNode (DataNode.java:run(1451)) - DatanodeRegistration(127.0.0.1:33846, storageID=DS-1713354651-67.195.138.28-33846-1319490372000, infoPort=60237, ipcPort=47606):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-10-24 21:06:22,602 INFO  ipc.Server (Server.java:stop(1693)) - Stopping server on 47606
    [junit] 2011-10-24 21:06:22,602 INFO  datanode.DataNode (DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-10-24 21:06:22,603 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-10-24 21:06:22,603 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-10-24 21:06:22,603 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-10-24 21:06:22,603 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(770)) - Shutting down DataNode 0
    [junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:stop(1693)) - Stopping server on 56297
    [junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 0 on 56297: exiting
    [junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:run(498)) - Stopping IPC Server listener on 56297
    [junit] 2011-10-24 21:06:22,705 INFO  ipc.Server (Server.java:run(638)) - Stopping IPC Server Responder
    [junit] 2011-10-24 21:06:22,705 INFO  datanode.DataNode (DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-10-24 21:06:22,806 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(622)) - Exiting DataBlockScanner thread.
    [junit] 2011-10-24 21:06:22,806 INFO  datanode.DataNode (DataNode.java:run(1451)) - DatanodeRegistration(127.0.0.1:48594, storageID=DS-2089348076-67.195.138.28-48594-1319490371879, infoPort=35335, ipcPort=56297):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-10-24 21:06:22,806 INFO  ipc.Server (Server.java:stop(1693)) - Stopping server on 56297
    [junit] 2011-10-24 21:06:22,806 INFO  datanode.DataNode (DataNode.java:shutdown(770)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-10-24 21:06:22,807 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-10-24 21:06:22,807 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-10-24 21:06:22,807 WARN  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-10-24 21:06:22,909 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2911)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-10-24 21:06:22,909 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-10-24 21:06:22,909 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(637)) - Number of transactions: 6 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 4 SyncTimes(ms): 61 3 
    [junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:stop(1693)) - Stopping server on 45512
    [junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 0 on 45512: exiting
    [junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 1 on 45512: exiting
    [junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 2 on 45512: exiting
    [junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 4 on 45512: exiting
    [junit] 2011-10-24 21:06:22,921 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 3 on 45512: exiting
    [junit] 2011-10-24 21:06:22,922 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 6 on 45512: exiting
    [junit] 2011-10-24 21:06:22,922 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 5 on 45512: exiting
    [junit] 2011-10-24 21:06:22,922 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 8 on 45512: exiting
    [junit] 2011-10-24 21:06:22,922 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 7 on 45512: exiting
    [junit] 2011-10-24 21:06:22,922 INFO  ipc.Server (Server.java:run(498)) - Stopping IPC Server listener on 45512
    [junit] 2011-10-24 21:06:22,922 INFO  ipc.Server (Server.java:run(1525)) - IPC Server handler 9 on 45512: exiting
    [junit] 2011-10-24 21:06:22,922 INFO  ipc.Server (Server.java:run(638)) - Stopping IPC Server Responder
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.117 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:764: Tests failed!

Total time: 118 minutes 29 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 103 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/103/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 18867 lines...]
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@10de6a6" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@4cec7" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@5d7490" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@13f2745" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@e7a256" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@748771" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1ddbdcd" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer FAILED (crashed)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35 sec

checkfailure:
    [touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:762: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:514: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:699: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:673: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:741: Tests failed!

Total time: 165 minutes 48 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.fs.TestHDFSFileContextMainOperations.testEditsLogOldRename

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testZeroSizeFile

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer.testOutOfMemoryErrorInDataXceiverServerRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.


FAILED:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 102 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/102/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 22049 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@3d1ccd" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1041911" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@2c86d2" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1df53c9" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@17a872b" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@b497e4" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@847976" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer FAILED (crashed)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 34.793 sec

checkfailure:
    [touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:762: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:514: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:699: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:673: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:741: Tests failed!

Total time: 122 minutes 10 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2491
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsConfigs

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer.testOutOfMemoryErrorInDataXceiverServerRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.
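
Editor's note: the stack traces in the console above ("Pretend there's no more memory", thrown from DataXceiver.run_aroundBody1$advice) come from the HDFS fault-injection (build-fi) test run, where AspectJ around-advice deliberately throws an OutOfMemoryError inside DataXceiver.run() so the test can check how DataXceiverServer copes. Purely as an illustrative sketch, assuming annotation-style AspectJ and a hypothetical aspect/advice name (this is not the project's actual aspect in src/test/aop), such advice could look like:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // Hypothetical fault-injection aspect: makes DataXceiver.run() behave as if
    // the JVM ran out of memory, instead of executing the real method body.
    @Aspect
    public class DataXceiverOomAspect {

      @Around("execution(void org.apache.hadoop.hdfs.server.datanode.DataXceiver.run())")
      public Object injectOom(ProceedingJoinPoint jp) throws Throwable {
        // Simulated fault: never proceed to the real run(); throw the same
        // pretend-OOM seen in the console log above.
        throw new OutOfMemoryError("Pretend there's no more memory");
      }
    }

The forked-VM failure reported for testOutOfMemoryErrorInDataXceiverServerRun means the injected error escaped far enough to take the test JVM down rather than being handled by the server loop.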


FAILED:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 101 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/101/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 26243 lines...]
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@6e2335" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@576165" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1e0a48e" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1508eed" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1d3f271" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@917c4d" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@74a830" java.lang.OutOfMemoryError: Pretend there's no more memory
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run_aroundBody1$advice(DataXceiver.java:36)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:1)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] Exception in thread "org.apache.hadoop.hdfs.server.datanode.DataXceiver@1d9a72f" 
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer FAILED (crashed)
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 34.932 sec

checkfailure:
    [touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/testsfailed

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:762: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:514: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:699: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:673: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:741: Tests failed!

Total time: 192 minutes 52 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2452
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFiDataXceiverServer.testOutOfMemoryErrorInDataXceiverServerRun

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.


REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
127.0.0.1:60986is not an underUtilized node: utilization=22.0 avgUtilization=22.0 threshold=10.0

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:60986is not an underUtilized node: utilization=22.0 avgUtilization=22.0 threshold=10.0
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1014)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1502)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.twoNodeTest(TestBalancer.java:312)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR2_4_39j3j5b10ou(TestBalancer.java:328)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324)
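
Editor's note: the balancer assertion above boils down to a simple arithmetic check: a datanode only counts as under-utilized when its utilization is more than `threshold` percentage points below the cluster average. With utilization=22.0, avgUtilization=22.0 and threshold=10.0 the node sits exactly at the average, so the expectation fails. A minimal sketch of that comparison, with the numbers from the assertion (method and class names here are illustrative assumptions, not the Balancer's actual code):

    // Illustrative under-utilization check mirroring the failed assertion.
    public class UtilizationCheck {
      /** True when the node sits more than 'threshold' percentage points
       *  below the cluster-wide average utilization. */
      static boolean isUnderUtilized(double utilization, double avgUtilization,
                                     double threshold) {
        return avgUtilization - utilization > threshold;
      }

      public static void main(String[] args) {
        // 22.0 is not more than 10 points below 22.0, so this prints: false
        System.out.println(isUnderUtilized(22.0, 22.0, 10.0));
      }
    }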


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks.testSufficientlyReplBlocksUsesNewRack

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestParallelImageWrite.testRestartDFS

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 100 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/100/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3071 lines...]
   [delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit 
    [junit]          jar:file:/home/jenkins/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit]      and jar:file:/home/jenkins/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.fs.TestFiListPath
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.926 sec
    [junit] Running org.apache.hadoop.fs.TestFiRename
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 13.429 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
    [junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 13.801 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHftp
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 34.574 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiPipelines
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.799 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
    [junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 212.434 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
    [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 315.829 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 35.002 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:764: Tests failed!

Total time: 132 minutes 37 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2451
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 99 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/99/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3074 lines...]
   [delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit 
    [junit]          jar:file:/home/jenkins/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit]      and jar:file:/home/jenkins/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.fs.TestFiListPath
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.969 sec
    [junit] Running org.apache.hadoop.fs.TestFiRename
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 26.378 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
    [junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 13.705 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHftp
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 47.891 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiPipelines
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.714 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
    [junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 206.927 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
    [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 270.187 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 529.946 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:764: Tests failed!

Total time: 131 minutes 10 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2286
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0

Error Message:
127.0.0.1:45647is not an underUtilized node: utilization=24.0 avgUtilization=24.0 threshold=10.0

Stack Trace:
junit.framework.AssertionFailedError: 127.0.0.1:45647is not an underUtilized node: utilization=24.0 avgUtilization=24.0 threshold=10.0
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1014)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1502)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:307)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR2_4_39j3j5b10ok(TestBalancer.java:327)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsConfigs

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Hdfs-22-branch - Build # 98 - Still Failing

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/98/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3145 lines...]
   [delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/cache

run-test-hdfs-excluding-commit-and-smoke:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/data
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/logs
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
     [copy] Copying 1 file to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build-fi/test/extraconf
    [junit] WARNING: multiple versions of ant detected in path for junit 
    [junit]          jar:file:/home/jenkins/tools/ant/latest/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit]      and jar:file:/home/jenkins/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.fs.TestFiListPath
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.966 sec
    [junit] Running org.apache.hadoop.fs.TestFiRename
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 20.38 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
    [junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 13.822 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiHftp
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 40.366 sec
    [junit] Running org.apache.hadoop.hdfs.TestFiPipelines
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.868 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol
    [junit] Tests run: 29, Failures: 0, Errors: 0, Time elapsed: 207.147 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2
    [junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 304.495 sec
    [junit] Running org.apache.hadoop.hdfs.server.datanode.TestFiPipelineClose
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 34.862 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:764: Tests failed!

Total time: 144 minutes 0 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HDFS-2012
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
REGRESSION:  org.apache.hadoop.fs.permission.TestStickyBit.testMovingFiles

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.TestDFSClientRetries.testWriteTimeoutAtDataNode

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.hdfs.TestRestartDFS.testRestartDFS

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.hdfs.TestParallelRead.testParallelRead

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.