Posted to mapreduce-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2011/02/05 14:25:08 UTC

Hadoop-Mapreduce-22-branch - Build # 28 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/28/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 207004 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-05 13:22:16,789 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,790 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,790 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,790 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,791 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,791 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,792 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,792 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,792 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,793 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,793 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,793 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,794 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,794 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,794 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,795 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,795 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,795 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,796 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-05 13:22:16,796 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.934 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.325 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.326 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: Tests failed!

Total time: 164 minutes 32 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
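For context: the "Timeout occurred" assertion above is emitted by Ant's <junit> task, not by the test code itself. When the forked test JVM exceeds the task's configured timeout, Ant kills the fork and records this synthetic failure, which is why the reported elapsed time does not reflect the time until the kill. A minimal build.xml sketch of that mechanism follows; the property names and the timeout value here are hypothetical illustrations, not taken from the branch's actual build.xml:

```xml
<!-- Hedged sketch (hypothetical names/values): Ant forks each test class and
     kills the JVM after 'timeout' milliseconds, reporting the failure as the
     "Timeout occurred" AssertionFailedError seen above. -->
<junit fork="yes" forkmode="perTest" timeout="900000"
       errorProperty="tests.failed" failureProperty="tests.failed"
       printsummary="yes">
  <batchtest todir="${test.build.dir}">
    <fileset dir="${test.src.dir}" includes="**/TestLocalRunner.java"/>
  </batchtest>
</junit>
```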




Hadoop-Mapreduce-22-branch - Build # 32 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/32/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 208503 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-10 10:34:17,342 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,343 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,343 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,344 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,344 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,344 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,345 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,345 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,345 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,346 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,346 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,346 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,347 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,347 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,348 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,348 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,348 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,349 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,349 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-10 10:34:17,349 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.9 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.325 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: Tests failed!

Total time: 157 minutes 58 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Re: Hadoop-Mapreduce-22-branch - Build # 31 - Still Failing

Posted by Todd Lipcon <to...@cloudera.com>.
We are so close! It looks like an issue with javadoc generation has now caused
the javadoc publish to fail:

javadoc:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/docs/api
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] 1 error
  [javadoc] javadoc: error - Cannot find doclet class org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet

Filed MAPREDUCE-2315
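
In Ant terms, the failure mode is that the custom doclet class must be resolvable on the <javadoc> task's doclet path; if the jar carrying it is missing from that path, javadoc aborts with exactly the "Cannot find doclet class" error quoted above, no build/docs/api output is produced, and the JavadocArchiver publisher then fails. A hedged sketch of such a target follows (target and property names are hypothetical, not the branch's actual build.xml):

```xml
<!-- Hedged sketch (hypothetical names): the doclet class must be on the
     nested <doclet> path attribute, otherwise javadoc fails with
     "Cannot find doclet class ..." and docs/api is never created. -->
<target name="javadoc">
  <mkdir dir="${build.dir}/docs/api"/>
  <javadoc destdir="${build.dir}/docs/api" sourcepath="${src.dir}">
    <doclet name="org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet"
            path="${annotations.classpath}"/> <!-- hypothetical property -->
  </javadoc>
</target>
```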

On Wed, Feb 9, 2011 at 6:14 PM, Apache Hudson Server <
hudson@hudson.apache.org> wrote:

> See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/31/



-- 
Todd Lipcon
Software Engineer, Cloudera

Hadoop-Mapreduce-22-branch - Build # 31 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/31/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 519341 lines...]

test:

clover.check:

clover.setup:
[clover-setup] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-setup] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-setup] Clover: Open Source License registered to Apache.
[clover-setup] Clover is enabled with initstring '/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/clover/db/hadoop_coverage.db'

clover.info:

clover:

generate-clover-reports:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/clover/reports
[clover-report] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-report] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-report] Clover: Open Source License registered to Apache.
[clover-report] Loading coverage database from: '/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/clover/db/hadoop_coverage.db'
[clover-report] Writing HTML report to '/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/clover/reports'
Fontconfig error: Cannot load default config file
[clover-report] Done. Processed 44 packages in 9123ms (207ms per package).
[clover-report] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-report] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-report] Clover: Open Source License registered to Apache.
[clover-report] Loading coverage database from: '/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/clover/db/hadoop_coverage.db'
[clover-report] Writing report to '/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/clover/reports/clover.xml'

BUILD SUCCESSFUL
Total time: 208 minutes 10 seconds
[FINDBUGS] Collecting findbugs analysis files...
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
ERROR: Publisher hudson.tasks.JavadocArchiver aborted due to exception
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/docs/api does not exist.
	at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:474)
	at hudson.FilePath$34.hasMatch(FilePath.java:1745)
	at hudson.FilePath$34.invoke(FilePath.java:1654)
	at hudson.FilePath$34.invoke(FilePath.java:1645)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1931)
	at hudson.remoting.UserRequest.perform(UserRequest.java:114)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:270)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Hadoop-Mapreduce-22-branch - Build # 30 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/30/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 207623 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-09 10:47:14,474 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,475 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,475 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,475 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,476 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,476 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,476 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,477 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,477 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,477 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,478 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,478 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,479 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,479 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,479 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,480 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,480 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,480 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,481 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-09 10:47:14,481 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.942 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.326 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: Tests failed!

Total time: 166 minutes 36 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-22-branch - Build # 29 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/29/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 520602 lines...]
    [junit] 11/02/07 02:16:25 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/07 02:16:25 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/07 02:16:25 INFO ipc.Server: Stopping server on 57740
    [junit] 11/02/07 02:16:25 INFO ipc.Server: IPC Server handler 0 on 57740: exiting
    [junit] 11/02/07 02:16:25 INFO ipc.Server: IPC Server handler 1 on 57740: exiting
    [junit] 11/02/07 02:16:25 INFO ipc.Server: IPC Server handler 2 on 57740: exiting
    [junit] 11/02/07 02:16:25 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/07 02:16:25 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/07 02:16:25 INFO ipc.Server: Stopping IPC Server listener on 57740
    [junit] 11/02/07 02:16:25 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:37820, storageID=DS-752266506-127.0.1.1-37820-1297044984435, infoPort=46280, ipcPort=57740):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 11/02/07 02:16:25 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/07 02:16:25 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/07 02:16:25 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:37820, storageID=DS-752266506-127.0.1.1-37820-1297044984435, infoPort=46280, ipcPort=57740):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/07 02:16:25 INFO ipc.Server: Stopping server on 57740
    [junit] 11/02/07 02:16:25 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/07 02:16:25 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/07 02:16:25 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/07 02:16:25 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/07 02:16:26 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/07 02:16:26 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 10 5 
    [junit] 11/02/07 02:16:26 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/07 02:16:26 INFO ipc.Server: Stopping server on 33445
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 0 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 2 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 3 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 5 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 9 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 4 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 7 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 1 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: Stopping IPC Server listener on 33445
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.543 sec
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 6 on 33445: exiting
    [junit] 11/02/07 02:16:26 INFO ipc.Server: IPC Server handler 8 on 33445: exiting

test:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:73: Tests failed!

Total time: 208 minutes 32 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
33 tests failed.
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs(TestFairScheduler.java:685)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple(TestFairScheduler.java:744)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs(TestFairScheduler.java:805)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple(TestFairScheduler.java:913)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities(TestFairScheduler.java:1027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools(TestFairScheduler.java:1100)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity(TestFairScheduler.java:1173)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple(TestFairScheduler.java:1249)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool(TestFairScheduler.java:1327)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs(TestFairScheduler.java:1381)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs(TestFairScheduler.java:1444)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits

Error Message:
expected:<0.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits(TestFairScheduler.java:1545)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight(TestFairScheduler.java:1571)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights

Error Message:
expected:<1.14> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.14> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights(TestFairScheduler.java:1609)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps(TestFairScheduler.java:1670)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces(TestFairScheduler.java:1694)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption(TestFairScheduler.java:1762)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob(TestFairScheduler.java:1840)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption(TestFairScheduler.java:1919)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools(TestFairScheduler.java:2023)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption(TestFairScheduler.java:2111)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled(TestFairScheduler.java:2185)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging(TestFairScheduler.java:2239)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel(TestFairScheduler.java:2291)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel(TestFairScheduler.java:2339)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack

Error Message:
expected:<0> but was:<6200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0> but was:<6200>
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack(TestFairScheduler.java:2440)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster(TestFairScheduler.java:2507)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoPool

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2555)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools(TestFairScheduler.java:2593)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools(TestFairScheduler.java:2634)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment(TestFairScheduler.java:2667)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2739)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout(TestFairScheduler.java:2778)


FAILED:  org.apache.hadoop.mapred.TestFairSchedulerSystem.testFairSchedulerSystem

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.