Posted to mapreduce-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2011/01/02 00:09:12 UTC

Hadoop-Mapreduce-trunk - Build # 547 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/547/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 210928 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-01 23:12:00,953 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,954 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,954 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,955 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,955 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,955 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,956 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,956 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,956 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-01 23:12:00,960 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 4.846 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.566 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.519 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!
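[Editor's note: the `checkfailure` -> "Tests failed!" sequence above is why every individual batch can print "Failures: 0, Errors: 0" and the build still fails: earlier batches touch a `testsfailed` marker file instead of aborting, and build.xml's final target fails the build if that marker exists. A minimal sketch of that marker-file pattern in plain Java (the class and method names here are illustrative, not from the Hadoop build):]

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the two-stage failure reporting used by the build: a failing
// test batch records a marker file (ant's <touch>), and the build's final
// check fails if the marker is present, long after the failing batch ran.
public class CheckFailure {
    // The real build uses build/test/testsfailed under the workspace;
    // the caller supplies the path here so the sketch is self-contained.
    static void recordFailure(Path marker) throws IOException {
        Files.createDirectories(marker.getParent());
        if (!Files.exists(marker)) {
            Files.createFile(marker);   // equivalent of ant's <touch>
        }
    }

    static boolean testsFailed(Path marker) {
        return Files.exists(marker);
    }

    public static void main(String[] args) throws IOException {
        Path marker = Files.createTempDirectory("demo").resolve("testsfailed");
        recordFailure(marker);          // some batch failed earlier in the run
        if (testsFailed(marker)) {      // build.xml's closing check
            System.out.println("Tests failed!");
        }
    }
}
```

This explains the otherwise confusing console shape: all the late batches pass, yet `run-test-mapred` ends in BUILD FAILED because the marker was created hours earlier.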

Total time: 337 minutes 42 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
8 tests failed.
FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestJobHistory.testJobHistoryFile

Error Message:
Config for completed jobs doesnt exist

Stack Trace:
junit.framework.AssertionFailedError: Config for completed jobs doesnt exist
	at org.apache.hadoop.mapred.TestJobHistory.__CLR3_0_2cs8b1wktf(TestJobHistory.java:791)
	at org.apache.hadoop.mapred.TestJobHistory.testJobHistoryFile(TestJobHistory.java:733)


FAILED:  org.apache.hadoop.mapred.TestJobRetire.testJobRetireWithUnreportedTasks

Error Message:
Job did not retire

Stack Trace:
junit.framework.AssertionFailedError: Job did not retire
	at org.apache.hadoop.mapred.TestJobRetire.waitTillRetire(TestJobRetire.java:170)
	at org.apache.hadoop.mapred.TestJobRetire.__CLR3_0_2eupjgdrch(TestJobRetire.java:275)
	at org.apache.hadoop.mapred.TestJobRetire.testJobRetireWithUnreportedTasks(TestJobRetire.java:208)


FAILED:  org.apache.hadoop.mapred.TestJvmManager.testJvmKill

Error Message:
pidFile is not present

Stack Trace:
junit.framework.AssertionFailedError: pidFile is not present
	at org.apache.hadoop.mapred.TestJvmManager.__CLR3_0_2wql8rwsfq(TestJvmManager.java:145)
	at org.apache.hadoop.mapred.TestJvmManager.testJvmKill(TestJvmManager.java:104)


FAILED:  org.apache.hadoop.mapred.TestMultiFileInputFormat.testFormat

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestWebUIAuthorization.testAuthorizationForJobHistoryPages

Error Message:
Incorrect return code for job submitter user1 expected:<200> but was:<500>

Stack Trace:
junit.framework.AssertionFailedError: Incorrect return code for job submitter user1 expected:<200> but was:<500>
	at org.apache.hadoop.mapred.TestWebUIAuthorization.validateViewJob(TestWebUIAuthorization.java:142)
	at org.apache.hadoop.mapred.TestWebUIAuthorization.__CLR3_0_21xxnoycpw(TestWebUIAuthorization.java:323)
	at org.apache.hadoop.mapred.TestWebUIAuthorization.testAuthorizationForJobHistoryPages(TestWebUIAuthorization.java:256)


FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.tools.TestCopyFiles.testGlobbing

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 599 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/599/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4044 lines...]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;${jackson.version} by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;${jackson.version} by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [common]
[ivy:resolve] 	com.thoughtworks.paranamer#paranamer;${paranamer.version} by [com.thoughtworks.paranamer#paranamer;2.2] in [common]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|      common      |   42  |   2   |   0   |   8   ||   34  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-common:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#raid [sync]
[ivy:retrieve] 	confs: [common]
[ivy:retrieve] 	34 artifacts copied, 0 already retrieved (13238kB/49ms)
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/ivy/ivysettings.xml

compile:
     [echo] contrib: raid
    [javac] Compiling 32 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/classes
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:50: org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid is not abstract and does not override abstract method chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,boolean,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy
    [javac] public class BlockPlacementPolicyRaid extends BlockPlacementPolicy {
    [javac]        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:109: chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid cannot override chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy; overridden method is final
    [javac]   DatanodeDescriptor[] chooseTarget(String srcPath, int numOfReplicas,
    [javac]                        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:118: cannot find symbol
    [javac] symbol  : method chooseTarget(int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long)
    [javac] location: class org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyDefault
    [javac]         defaultPolicy.chooseTarget(numOfReplicas, writer,
    [javac]                      ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 3 errors
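[Editor's note: the three javac errors above are classic API-drift failures. Reading the messages, the HDFS base class `BlockPlacementPolicy` gained a new abstract `chooseTarget` overload (with an extra `boolean` parameter) and made the old overload `final`, so the out-of-tree raid subclass neither implements the new method nor may override the old one. A compilable sketch of the corrected shape, using simplified stand-in names (`PlacementPolicy`, `RaidPolicy`, `choose`) rather than the real HDFS signatures:]

```java
// Stand-ins for BlockPlacementPolicy / BlockPlacementPolicyRaid: the
// subclass must implement the new abstract overload, and may only call
// (not override) the final legacy overload.
abstract class PlacementPolicy {
    // New abstract overload every subclass must now implement.
    abstract String choose(String path, boolean returnChosenNodes);

    // Legacy overload, now final: overriding it is the second javac error.
    final String choose(String path) {
        return choose(path, false);
    }
}

class RaidPolicy extends PlacementPolicy {
    @Override
    String choose(String path, boolean returnChosenNodes) {
        return "raid:" + path + (returnChosenNodes ? "+nodes" : "");
    }
}

public class PlacementDemo {
    public static void main(String[] args) {
        PlacementPolicy p = new RaidPolicy();
        System.out.println(p.choose("/a"));        // dispatches via the final overload
        System.out.println(p.choose("/a", true));  // calls the new overload directly
    }
}
```

Deleting the subclass's override of the final overload and adding an implementation of the new abstract one (as above) is the shape of the fix; the third error, `cannot find symbol` on `defaultPolicy.chooseTarget(...)`, is the same drift seen from the caller's side, where the delegated-to overload no longer exists with that parameter list.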

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:432: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:193: Compile failed; see the compiler error output for details.

Total time: 2 minutes 12 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 598 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/598/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4076 lines...]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;${jackson.version} by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;${jackson.version} by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [common]
[ivy:resolve] 	com.thoughtworks.paranamer#paranamer;${paranamer.version} by [com.thoughtworks.paranamer#paranamer;2.2] in [common]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|      common      |   42  |   2   |   0   |   8   ||   34  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-common:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#raid [sync]
[ivy:retrieve] 	confs: [common]
[ivy:retrieve] 	34 artifacts copied, 0 already retrieved (13238kB/53ms)
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/ivy/ivysettings.xml

compile:
     [echo] contrib: raid
    [javac] Compiling 32 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/classes
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:50: org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid is not abstract and does not override abstract method chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,boolean,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy
    [javac] public class BlockPlacementPolicyRaid extends BlockPlacementPolicy {
    [javac]        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:109: chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid cannot override chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy; overridden method is final
    [javac]   DatanodeDescriptor[] chooseTarget(String srcPath, int numOfReplicas,
    [javac]                        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:118: cannot find symbol
    [javac] symbol  : method chooseTarget(int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long)
    [javac] location: class org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyDefault
    [javac]         defaultPolicy.chooseTarget(numOfReplicas, writer,
    [javac]                      ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 3 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:432: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:193: Compile failed; see the compiler error output for details.

Total time: 2 minutes 26 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 597 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/597/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 212900 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-16 15:57:06,739 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,739 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,740 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,740 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,740 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,741 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,741 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,741 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,742 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,742 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,742 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,743 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,743 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,743 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,744 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,744 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,744 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,745 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,745 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,746 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.008 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.34 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.294 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 169 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 596 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/596/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 212799 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-15 16:16:22,448 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,449 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,449 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,450 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,450 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,450 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,451 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,451 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,451 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,452 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,452 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,452 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,453 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,453 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,453 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,454 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,454 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,454 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,455 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,455 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.003 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.369 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.293 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 188 minutes 41 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 595 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/595/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 211644 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-14 16:00:14,239 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,239 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,240 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,240 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,240 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,241 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,241 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,241 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,242 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,242 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,242 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,243 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,243 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,243 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,244 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,244 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,244 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,245 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,245 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,245 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.976 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.362 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.321 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 173 minutes 7 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 594 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/594/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 325849 lines...]
    [junit] 11/02/13 18:17:36 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/13 18:17:36 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/13 18:17:36 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping server on 42173
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 1 on 42173: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server listener on 42173
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/13 18:17:36 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:48199, storageID=DS-594520111-127.0.1.1-48199-1297621055411, infoPort=51980, ipcPort=42173):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 0 on 42173: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 2 on 42173: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/13 18:17:36 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:48199, storageID=DS-594520111-127.0.1.1-48199-1297621055411, infoPort=51980, ipcPort=42173):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping server on 42173
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/13 18:17:36 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/13 18:17:36 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/13 18:17:36 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/13 18:17:36 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/13 18:17:36 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/13 18:17:36 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 9 4 
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping server on 34454
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 0 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 1 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 2 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 5 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server listener on 34454
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 6 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 8 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 9 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 7 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 3 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 4 on 34454: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.165 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 310 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 593 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/593/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 316357 lines...]
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/12 18:24:48 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/12 18:24:48 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping server on 60757
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 0 on 60757: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 2 on 60757: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server listener on 60757
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 1 on 60757: exiting
    [junit] 11/02/12 18:24:48 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:55804, storageID=DS-497933803-127.0.1.1-55804-1297535087140, infoPort=50236, ipcPort=60757):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/02/12 18:24:48 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/12 18:24:48 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/12 18:24:48 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:55804, storageID=DS-497933803-127.0.1.1-55804-1297535087140, infoPort=50236, ipcPort=60757):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping server on 60757
    [junit] 11/02/12 18:24:48 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/12 18:24:48 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/12 18:24:48 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/12 18:24:48 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 7 5 
    [junit] 11/02/12 18:24:48 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping server on 51760
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 0 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 3 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 1 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 2 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 4 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 5 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 8 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 7 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 6 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server listener on 51760
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 9 on 51760: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.184 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 316 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.streaming.TestMultipleCachefiles.testMultipleCachefiles

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 592 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/592/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 212778 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-11 15:55:10,255 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,255 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,256 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,256 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,256 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,257 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,257 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,258 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,258 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,258 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,259 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,259 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,259 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,260 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,260 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,262 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,262 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.912 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.332 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.316 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 167 minutes 25 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 591 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/591/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 318093 lines...]
    [junit] 11/02/10 18:10:39 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/10 18:10:39 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/10 18:10:39 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping server on 37712
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 1 on 37712: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 0 on 37712: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/10 18:10:39 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:33346, storageID=DS-1372034669-127.0.1.1-33346-1297361437854, infoPort=60239, ipcPort=37712):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server listener on 37712
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 2 on 37712: exiting
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/10 18:10:39 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:33346, storageID=DS-1372034669-127.0.1.1-33346-1297361437854, infoPort=60239, ipcPort=37712):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping server on 37712
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/10 18:10:39 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/10 18:10:39 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/10 18:10:39 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/10 18:10:39 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/10 18:10:39 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 5 5 
    [junit] 11/02/10 18:10:39 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping server on 49558
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 0 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 2 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 3 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 6 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server listener on 49558
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 9 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 7 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 1 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 5 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 8 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 4 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server Responder
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.468 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 300 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 590 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/590/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213363 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-10 10:43:35,840 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,840 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,841 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,841 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,841 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,842 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,842 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,842 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,843 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,843 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,843 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,844 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,844 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,845 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,845 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,845 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,846 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,846 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,846 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,847 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.979 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.308 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 170 minutes 1 second
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 589 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/589/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 319831 lines...]
    [junit] 11/02/09 18:20:01 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/09 18:20:01 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/09 18:20:01 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping server on 56258
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 0 on 56258: exiting
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 1 on 56258: exiting
    [junit] 11/02/09 18:20:02 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:60658, storageID=DS-1133570672-127.0.1.1-60658-1297275600631, infoPort=42252, ipcPort=56258):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 2 on 56258: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server listener on 56258
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/09 18:20:02 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:60658, storageID=DS-1133570672-127.0.1.1-60658-1297275600631, infoPort=42252, ipcPort=56258):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping server on 56258
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/09 18:20:02 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/09 18:20:02 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/09 18:20:02 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/09 18:20:02 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/09 18:20:02 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 8 6 
    [junit] 11/02/09 18:20:02 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping server on 40221
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 0 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 1 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server listener on 40221
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 2 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 3 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 5 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 4 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 6 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 9 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 7 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 8 on 40221: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.608 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 312 minutes 36 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
37 tests failed.
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs(TestFairScheduler.java:685)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple(TestFairScheduler.java:744)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs(TestFairScheduler.java:805)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple(TestFairScheduler.java:913)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities(TestFairScheduler.java:1027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools(TestFairScheduler.java:1100)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity(TestFairScheduler.java:1173)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple(TestFairScheduler.java:1249)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool(TestFairScheduler.java:1327)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs(TestFairScheduler.java:1369)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs(TestFairScheduler.java:1448)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits

Error Message:
expected:<0.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits(TestFairScheduler.java:1549)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight(TestFairScheduler.java:1575)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights

Error Message:
expected:<1.14> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.14> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights(TestFairScheduler.java:1613)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps(TestFairScheduler.java:1674)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces(TestFairScheduler.java:1698)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption(TestFairScheduler.java:1766)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob(TestFairScheduler.java:1844)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption(TestFairScheduler.java:1923)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools(TestFairScheduler.java:2027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption(TestFairScheduler.java:2115)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled(TestFairScheduler.java:2189)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging(TestFairScheduler.java:2243)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel(TestFairScheduler.java:2295)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel(TestFairScheduler.java:2343)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack

Error Message:
expected:<0> but was:<6200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0> but was:<6200>
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack(TestFairScheduler.java:2444)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster(TestFairScheduler.java:2511)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoPool

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2559)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools(TestFairScheduler.java:2597)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools(TestFairScheduler.java:2638)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment(TestFairScheduler.java:2671)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout(TestFairScheduler.java:2782)


FAILED:  org.apache.hadoop.mapred.TestFairSchedulerSystem.testFairSchedulerSystem

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 588 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/588/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 319048 lines...]
    [junit] 11/02/08 18:21:08 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/08 18:21:08 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/08 18:21:08 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping server on 37595
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 0 on 37595: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 2 on 37595: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server listener on 37595
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 1 on 37595: exiting
    [junit] 11/02/08 18:21:08 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:51243, storageID=DS-1530136067-127.0.1.1-51243-1297189267407, infoPort=45444, ipcPort=37595):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/08 18:21:08 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:51243, storageID=DS-1530136067-127.0.1.1-51243-1297189267407, infoPort=45444, ipcPort=37595):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping server on 37595
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/08 18:21:08 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/08 18:21:08 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/08 18:21:08 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/08 18:21:08 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/08 18:21:08 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 9 6 
    [junit] 11/02/08 18:21:08 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping server on 45944
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 0 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 2 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server listener on 45944
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 1 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 5 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 3 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 4 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 8 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 9 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 7 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 6 on 45944: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.553 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 313 minutes 27 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
37 tests failed.
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs(TestFairScheduler.java:685)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple(TestFairScheduler.java:744)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs(TestFairScheduler.java:805)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple(TestFairScheduler.java:913)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities(TestFairScheduler.java:1027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools(TestFairScheduler.java:1100)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity(TestFairScheduler.java:1173)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple(TestFairScheduler.java:1249)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool(TestFairScheduler.java:1327)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs(TestFairScheduler.java:1369)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs(TestFairScheduler.java:1448)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits

Error Message:
expected:<0.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits(TestFairScheduler.java:1549)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight(TestFairScheduler.java:1575)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights

Error Message:
expected:<1.14> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.14> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights(TestFairScheduler.java:1613)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps(TestFairScheduler.java:1674)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces(TestFairScheduler.java:1698)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption(TestFairScheduler.java:1766)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob(TestFairScheduler.java:1844)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption(TestFairScheduler.java:1923)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools(TestFairScheduler.java:2027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption(TestFairScheduler.java:2115)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled(TestFairScheduler.java:2189)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging(TestFairScheduler.java:2243)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel(TestFairScheduler.java:2295)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel(TestFairScheduler.java:2343)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack

Error Message:
expected:<0> but was:<6200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0> but was:<6200>
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack(TestFairScheduler.java:2444)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster(TestFairScheduler.java:2511)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoPool

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2559)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools(TestFairScheduler.java:2597)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools(TestFairScheduler.java:2638)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment(TestFairScheduler.java:2671)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout(TestFairScheduler.java:2782)


FAILED:  org.apache.hadoop.mapred.TestFairSchedulerSystem.testFairSchedulerSystem

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 587 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/587/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213527 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-07 15:57:54,246 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has null TaskStatus
    [junit] 2011-02-07 15:57:54,246 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has null TaskStatus
    [junit] 2011-02-07 15:57:54,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has null TaskStatus
    [junit] 2011-02-07 15:57:54,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has null TaskStatus
    [junit] 2011-02-07 15:57:54,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has null TaskStatus
    [junit] 2011-02-07 15:57:54,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has null TaskStatus
    [junit] 2011-02-07 15:57:54,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has null TaskStatus
    [junit] 2011-02-07 15:57:54,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has null TaskStatus
    [junit] 2011-02-07 15:57:54,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has null TaskStatus
    [junit] 2011-02-07 15:57:54,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has null TaskStatus
    [junit] 2011-02-07 15:57:54,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has null TaskStatus
    [junit] 2011-02-07 15:57:54,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has null TaskStatus
    [junit] 2011-02-07 15:57:54,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has null TaskStatus
    [junit] 2011-02-07 15:57:54,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has null TaskStatus
    [junit] 2011-02-07 15:57:54,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has null TaskStatus
    [junit] 2011-02-07 15:57:54,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has null TaskStatus
    [junit] 2011-02-07 15:57:54,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has null TaskStatus
    [junit] 2011-02-07 15:57:54,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has null TaskStatus
    [junit] 2011-02-07 15:57:54,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has null TaskStatus
    [junit] 2011-02-07 15:57:54,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has null TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.999 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.305 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 170 minutes 24 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 586 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/586/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 228582 lines...]
    [javac]     assertFalse(new Pair<Integer, Integer>(
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:54: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:74: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:80: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(Integer.valueOf(VAL_A), null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:92: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:98: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, Integer.valueOf(VAL_A))
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:117: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(0 == new Pair<Integer, Integer>(Integer.valueOf(VAL_A),
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:125: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(0 == new Pair<Integer, Integer>(Integer.valueOf(VAL_A),
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 22 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 192 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
33 tests failed.
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs(TestFairScheduler.java:685)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple(TestFairScheduler.java:744)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs(TestFairScheduler.java:805)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple(TestFairScheduler.java:913)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities(TestFairScheduler.java:1027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools(TestFairScheduler.java:1100)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity(TestFairScheduler.java:1173)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple(TestFairScheduler.java:1249)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool(TestFairScheduler.java:1327)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs(TestFairScheduler.java:1369)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs(TestFairScheduler.java:1448)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits

Error Message:
expected:<0.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits(TestFairScheduler.java:1549)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight(TestFairScheduler.java:1575)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights

Error Message:
expected:<1.14> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.14> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights(TestFairScheduler.java:1613)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps(TestFairScheduler.java:1674)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces(TestFairScheduler.java:1698)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption(TestFairScheduler.java:1766)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob(TestFairScheduler.java:1844)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption(TestFairScheduler.java:1923)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools(TestFairScheduler.java:2027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption(TestFairScheduler.java:2115)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled(TestFairScheduler.java:2189)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging(TestFairScheduler.java:2243)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel(TestFairScheduler.java:2295)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel(TestFairScheduler.java:2343)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack

Error Message:
expected:<0> but was:<6200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0> but was:<6200>
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack(TestFairScheduler.java:2444)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster(TestFairScheduler.java:2511)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoPool

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2559)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools(TestFairScheduler.java:2597)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools(TestFairScheduler.java:2638)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment(TestFairScheduler.java:2671)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout(TestFairScheduler.java:2782)


FAILED:  org.apache.hadoop.mapred.TestFairSchedulerSystem.testFairSchedulerSystem

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 585 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/585/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 214219 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-05 15:42:31,035 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,036 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,036 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,037 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,037 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,038 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,039 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,039 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,039 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,040 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,040 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,041 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,041 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,041 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,042 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,042 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,042 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,043 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,043 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,044 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.045 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.327 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.327 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 154 minutes 55 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 584 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/584/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213816 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-04 15:55:08,077 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,078 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,078 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,078 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,079 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,079 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,079 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,080 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,080 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,080 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,081 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,081 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,081 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,082 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,082 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,082 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,083 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,083 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,083 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,084 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.98 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.303 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.307 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 167 minutes 29 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 583 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/583/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213623 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-04 00:07:40,002 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,002 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,002 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,003 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,003 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,003 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,004 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,004 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,005 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,005 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,005 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,006 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,006 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,006 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,007 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,007 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,007 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,008 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,008 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,008 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.916 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.322 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.317 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 162 minutes 21 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 582 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/582/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213981 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-03 15:44:28,463 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,463 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,464 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,464 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,464 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,465 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,465 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,465 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,466 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,466 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,466 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,467 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,467 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,467 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,468 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,468 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,469 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,469 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,469 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,470 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.955 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.35 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.318 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 156 minutes 43 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:241)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:422)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:368)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:333)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:461)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:442)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken$1.run(TestUmbilicalProtocolWithJobToken.java:102)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1142)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.__CLR3_0_2ky5ls2wkg(TestUmbilicalProtocolWithJobToken.java:97)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc(TestUmbilicalProtocolWithJobToken.java:75)




Hadoop-Mapreduce-trunk - Build # 581 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/581/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 215114 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-02 15:50:21,956 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,960 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,960 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,960 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,961 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,961 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,961 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,962 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,962 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,962 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,963 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.097 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.315 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.295 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 162 minutes 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:241)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:422)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:368)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:333)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:461)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:442)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken$1.run(TestUmbilicalProtocolWithJobToken.java:102)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1142)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.__CLR3_0_2ky5ls2wkg(TestUmbilicalProtocolWithJobToken.java:97)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc(TestUmbilicalProtocolWithJobToken.java:75)




Hadoop-Mapreduce-trunk - Build # 580 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/580/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6304 lines...]
[ivy:resolve] 	found org.aspectj#aspectjrt;1.6.5 in maven2
[ivy:resolve] 	found org.aspectj#aspectjtools;1.6.5 in maven2
[ivy:resolve] 	found org.apache.hadoop#hadoop-hdfs-test;0.23.0-SNAPSHOT in apache-snapshot
[ivy:resolve] :: resolution report :: resolve 496ms :: artifacts dl 15ms
[ivy:resolve] 	:: evicted modules:
[ivy:resolve] 	commons-logging#commons-logging;1.0.4 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.2 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M4 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftplet-api;1.0.0-M2 by [org.apache.ftpserver#ftplet-api;1.0.0] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftpserver-core;1.0.0-M2 by [org.apache.ftpserver#ftpserver-core;1.0.0] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M2 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;1.0.1 by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;1.0.1 by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [test]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [test]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|       test       |   54  |   4   |   0   |   12  ||   42  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-test:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#mumak [sync]
[ivy:retrieve] 	confs: [test]
[ivy:retrieve] 	42 artifacts copied, 0 already retrieved (24336kB/93ms)

compile-test:
     [echo] contrib: mumak
    [javac] Compiling 15 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/mumak/test
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java:56: org.apache.hadoop.mapred.MockSimulatorJobTracker is not abstract and does not override abstract method getProtocolSignature(java.lang.String,long,int) in org.apache.hadoop.ipc.VersionedProtocol
    [javac] public class MockSimulatorJobTracker implements InterTrackerProtocol,
    [javac]        ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 1 error

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:1149: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:39: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 3 minutes 9 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 579 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/579/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6328 lines...]
[ivy:resolve] 	found org.aspectj#aspectjrt;1.6.5 in maven2
[ivy:resolve] 	found org.aspectj#aspectjtools;1.6.5 in maven2
[ivy:resolve] 	found org.apache.hadoop#hadoop-hdfs-test;0.23.0-SNAPSHOT in apache-snapshot
[ivy:resolve] :: resolution report :: resolve 384ms :: artifacts dl 16ms
[ivy:resolve] 	:: evicted modules:
[ivy:resolve] 	commons-logging#commons-logging;1.0.4 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.2 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M4 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftplet-api;1.0.0-M2 by [org.apache.ftpserver#ftplet-api;1.0.0] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftpserver-core;1.0.0-M2 by [org.apache.ftpserver#ftpserver-core;1.0.0] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M2 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;1.0.1 by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;1.0.1 by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [test]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [test]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|       test       |   54  |   4   |   0   |   12  ||   42  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-test:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#mumak [sync]
[ivy:retrieve] 	confs: [test]
[ivy:retrieve] 	42 artifacts copied, 0 already retrieved (24335kB/86ms)

compile-test:
     [echo] contrib: mumak
    [javac] Compiling 15 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/mumak/test
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java:56: org.apache.hadoop.mapred.MockSimulatorJobTracker is not abstract and does not override abstract method getProtocolSignature(java.lang.String,long,int) in org.apache.hadoop.ipc.VersionedProtocol
    [javac] public class MockSimulatorJobTracker implements InterTrackerProtocol,
    [javac]        ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 1 error

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:1149: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:39: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 3 minutes 37 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 578 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/578/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6347 lines...]
[ivy:resolve] 	found org.aspectj#aspectjrt;1.6.5 in maven2
[ivy:resolve] 	found org.aspectj#aspectjtools;1.6.5 in maven2
[ivy:resolve] 	found org.apache.hadoop#hadoop-hdfs-test;0.23.0-SNAPSHOT in apache-snapshot
[ivy:resolve] :: resolution report :: resolve 414ms :: artifacts dl 12ms
[ivy:resolve] 	:: evicted modules:
[ivy:resolve] 	commons-logging#commons-logging;1.0.4 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.2 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M4 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftplet-api;1.0.0-M2 by [org.apache.ftpserver#ftplet-api;1.0.0] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftpserver-core;1.0.0-M2 by [org.apache.ftpserver#ftpserver-core;1.0.0] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M2 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;1.0.1 by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;1.0.1 by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [test]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [test]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|       test       |   54  |   4   |   0   |   12  ||   42  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-test:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#mumak [sync]
[ivy:retrieve] 	confs: [test]
[ivy:retrieve] 	42 artifacts copied, 0 already retrieved (24335kB/83ms)

compile-test:
     [echo] contrib: mumak
    [javac] Compiling 15 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/mumak/test
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java:56: org.apache.hadoop.mapred.MockSimulatorJobTracker is not abstract and does not override abstract method getProtocolSignature(java.lang.String,long,int) in org.apache.hadoop.ipc.VersionedProtocol
    [javac] public class MockSimulatorJobTracker implements InterTrackerProtocol,
    [javac]        ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 1 error

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:1149: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:39: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 3 minutes 36 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 577 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/577/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2585 lines...]
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/JobTracker.java:327: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:406: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.TaskTracker
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:403: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:64: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.IsolationRunner.FakeUmbilical
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:61: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:96: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:93: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:136: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner.Job
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:133: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 19 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:394: Compile failed; see the compiler error output for details.

Total time: 42 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 576 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/576/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2585 lines...]
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/JobTracker.java:327: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:406: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.TaskTracker
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:403: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:64: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.IsolationRunner.FakeUmbilical
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:61: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:96: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:93: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:136: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner.Job
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:133: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 19 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:394: Compile failed; see the compiler error output for details.

Total time: 42 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 575 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/575/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2584 lines...]
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/JobTracker.java:327: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:406: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.TaskTracker
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:403: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:64: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.IsolationRunner.FakeUmbilical
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:61: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:96: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:93: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:136: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner.Job
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:133: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 19 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:394: Compile failed; see the compiler error output for details.

Total time: 42 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Re: Hadoop-Mapreduce-trunk - Build # 574 - Still Failing

Posted by Giridharan Kesavan <gk...@yahoo-inc.com>.
Its the common build that failed to publish. I ve it published manually and triggered the mapreduce trunk build. 

I may have to restart the common build slave and trigger the common build to publish artifacts automatically.

On Jan 28, 2011, at 10:28 AM, Todd Lipcon wrote:

> These are caused by HADOOP-7118, which was committed to common trunk on
> Wednesday. Again seems like MR build isn't pulling the Common artifacts (or
> Common isn't publishing them)
> 
> On Fri, Jan 28, 2011 at 8:13 AM, Apache Hudson Server <
> hudson@hudson.apache.org> wrote:
> 
>> See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/574/
>> 
>> 
>> ###################################################################################
>> ########################## LAST 60 LINES OF THE CONSOLE
>> ###########################
>> [...truncated 225572 lines...]
>>   [javac]     assertFalse(new Pair<Integer, Integer>(
>>   [javac]     ^
>>   [javac]
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:54:
>> cannot find symbol
>>   [javac] symbol  : method assertFalse(boolean)
>>   [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>>   [javac]     assertFalse(new Pair<Integer, Integer>(
>>   [javac]     ^
>>   [javac]
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:74:
>> cannot find symbol
>>   [javac] symbol  : method assertFalse(boolean)
>>   [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>>   [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
>>   [javac]     ^
>>   [javac]
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:80:
>> cannot find symbol
>>   [javac] symbol  : method assertFalse(boolean)
>>   [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>>   [javac]     assertFalse(new Pair<Integer,
>> Integer>(Integer.valueOf(VAL_A), null)
>>   [javac]     ^
>>   [javac]
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:92:
>> cannot find symbol
>>   [javac] symbol  : method assertFalse(boolean)
>>   [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>>   [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
>>   [javac]     ^
>>   [javac]
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:98:
>> cannot find symbol
>>   [javac] symbol  : method assertFalse(boolean)
>>   [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>>   [javac]     assertFalse(new Pair<Integer, Integer>(null,
>> Integer.valueOf(VAL_A))
>>   [javac]     ^
>>   [javac]
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:117:
>> cannot find symbol
>>   [javac] symbol  : method assertFalse(boolean)
>>   [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>>   [javac]     assertFalse(0 == new Pair<Integer,
>> Integer>(Integer.valueOf(VAL_A),
>>   [javac]     ^
>>   [javac]
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:125:
>> cannot find symbol
>>   [javac] symbol  : method assertFalse(boolean)
>>   [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>>   [javac]     assertFalse(0 == new Pair<Integer,
>> Integer>(Integer.valueOf(VAL_A),
>>   [javac]     ^
>>   [javac] Note: Some input files use or override a deprecated API.
>>   [javac] Note: Recompile with -Xlint:deprecation for details.
>>   [javac] Note: Some input files use unchecked or unsafe operations.
>>   [javac] Note: Recompile with -Xlint:unchecked for details.
>>   [javac] 22 errors
>> 
>> BUILD FAILED
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821:
>> The following error occurred while executing this line:
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805:
>> The following error occurred while executing this line:
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60:
>> The following error occurred while executing this line:
>> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229:
>> Compile failed; see the compiler error output for details.
>> 
>> Total time: 184 minutes 23 seconds
>> [FINDBUGS] Skipping publisher since build result is FAILURE
>> Publishing Javadoc
>> Archiving artifacts
>> Recording test results
>> Recording fingerprints
>> Publishing Clover coverage report...
>> No Clover report will be published due to a Build Failure
>> Email was triggered for: Failure
>> Sending email for trigger: Failure
>> 
>> 
>> 
>> 
>> ###################################################################################
>> ############################## FAILED TESTS (if any)
>> ##############################
>> 2 tests failed.
>> FAILED:
>> org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testFailingJobInitalization
>> 
>> Error Message:
>> null
>> 
>> Stack Trace:
>> java.lang.NullPointerException
>>       at
>> org.apache.hadoop.conf.Configuration.asXmlDocument(Configuration.java:1624)
>>       at
>> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1592)
>>       at
>> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1582)
>>       at
>> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.setUpSchedulerConfigFile(ClusterWithCapacityScheduler.java:132)
>>       at
>> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.startCluster(ClusterWithCapacityScheduler.java:103)
>>       at
>> org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testFailingJobInitalization(TestCapacitySchedulerWithJobTracker.java:48)
>> 
>> 
>> FAILED:
>> org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testJobTrackerIntegration
>> 
>> Error Message:
>> null
>> 
>> Stack Trace:
>> java.lang.NullPointerException
>>       at
>> org.apache.hadoop.conf.Configuration.asXmlDocument(Configuration.java:1624)
>>       at
>> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1592)
>>       at
>> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1582)
>>       at
>> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.setUpSchedulerConfigFile(ClusterWithCapacityScheduler.java:132)
>>       at
>> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.startCluster(ClusterWithCapacityScheduler.java:103)
>>       at
>> org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testJobTrackerIntegration(TestCapacitySchedulerWithJobTracker.java:92)
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Todd Lipcon
> Software Engineer, Cloudera


Re: Hadoop-Mapreduce-trunk - Build # 574 - Still Failing

Posted by Todd Lipcon <to...@cloudera.com>.
These are caused by HADOOP-7118, which was committed to common trunk on
Wednesday. Again seems like MR build isn't pulling the Common artifacts (or
Common isn't publishing them)

On Fri, Jan 28, 2011 at 8:13 AM, Apache Hudson Server <
hudson@hudson.apache.org> wrote:

> See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/574/
>
>
> ###################################################################################
> ########################## LAST 60 LINES OF THE CONSOLE
> ###########################
> [...truncated 225572 lines...]
>    [javac]     assertFalse(new Pair<Integer, Integer>(
>    [javac]     ^
>    [javac]
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:54:
> cannot find symbol
>    [javac] symbol  : method assertFalse(boolean)
>    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>    [javac]     assertFalse(new Pair<Integer, Integer>(
>    [javac]     ^
>    [javac]
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:74:
> cannot find symbol
>    [javac] symbol  : method assertFalse(boolean)
>    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
>    [javac]     ^
>    [javac]
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:80:
> cannot find symbol
>    [javac] symbol  : method assertFalse(boolean)
>    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>    [javac]     assertFalse(new Pair<Integer,
> Integer>(Integer.valueOf(VAL_A), null)
>    [javac]     ^
>    [javac]
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:92:
> cannot find symbol
>    [javac] symbol  : method assertFalse(boolean)
>    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
>    [javac]     ^
>    [javac]
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:98:
> cannot find symbol
>    [javac] symbol  : method assertFalse(boolean)
>    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>    [javac]     assertFalse(new Pair<Integer, Integer>(null,
> Integer.valueOf(VAL_A))
>    [javac]     ^
>    [javac]
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:117:
> cannot find symbol
>    [javac] symbol  : method assertFalse(boolean)
>    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>    [javac]     assertFalse(0 == new Pair<Integer,
> Integer>(Integer.valueOf(VAL_A),
>    [javac]     ^
>    [javac]
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:125:
> cannot find symbol
>    [javac] symbol  : method assertFalse(boolean)
>    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
>    [javac]     assertFalse(0 == new Pair<Integer,
> Integer>(Integer.valueOf(VAL_A),
>    [javac]     ^
>    [javac] Note: Some input files use or override a deprecated API.
>    [javac] Note: Recompile with -Xlint:deprecation for details.
>    [javac] Note: Some input files use unchecked or unsafe operations.
>    [javac] Note: Recompile with -Xlint:unchecked for details.
>    [javac] 22 errors
>
> BUILD FAILED
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821:
> The following error occurred while executing this line:
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805:
> The following error occurred while executing this line:
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60:
> The following error occurred while executing this line:
> /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229:
> Compile failed; see the compiler error output for details.
>
> Total time: 184 minutes 23 seconds
> [FINDBUGS] Skipping publisher since build result is FAILURE
> Publishing Javadoc
> Archiving artifacts
> Recording test results
> Recording fingerprints
> Publishing Clover coverage report...
> No Clover report will be published due to a Build Failure
> Email was triggered for: Failure
> Sending email for trigger: Failure
>
>
>
>
> ###################################################################################
> ############################## FAILED TESTS (if any)
> ##############################
> 2 tests failed.
> FAILED:
>  org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testFailingJobInitalization
>
> Error Message:
> null
>
> Stack Trace:
> java.lang.NullPointerException
>        at
> org.apache.hadoop.conf.Configuration.asXmlDocument(Configuration.java:1624)
>        at
> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1592)
>        at
> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1582)
>        at
> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.setUpSchedulerConfigFile(ClusterWithCapacityScheduler.java:132)
>        at
> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.startCluster(ClusterWithCapacityScheduler.java:103)
>        at
> org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testFailingJobInitalization(TestCapacitySchedulerWithJobTracker.java:48)
>
>
> FAILED:
>  org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testJobTrackerIntegration
>
> Error Message:
> null
>
> Stack Trace:
> java.lang.NullPointerException
>        at
> org.apache.hadoop.conf.Configuration.asXmlDocument(Configuration.java:1624)
>        at
> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1592)
>        at
> org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1582)
>        at
> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.setUpSchedulerConfigFile(ClusterWithCapacityScheduler.java:132)
>        at
> org.apache.hadoop.mapred.ClusterWithCapacityScheduler.startCluster(ClusterWithCapacityScheduler.java:103)
>        at
> org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testJobTrackerIntegration(TestCapacitySchedulerWithJobTracker.java:92)
>
>
>
>


-- 
Todd Lipcon
Software Engineer, Cloudera

Hadoop-Mapreduce-trunk - Build # 574 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/574/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 225572 lines...]
    [javac]     assertFalse(new Pair<Integer, Integer>(
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:54: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:74: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:80: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(Integer.valueOf(VAL_A), null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:92: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:98: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, Integer.valueOf(VAL_A))
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:117: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(0 == new Pair<Integer, Integer>(Integer.valueOf(VAL_A),
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:125: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(0 == new Pair<Integer, Integer>(Integer.valueOf(VAL_A),
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 22 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 184 minutes 23 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testFailingJobInitalization

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.conf.Configuration.asXmlDocument(Configuration.java:1624)
	at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1592)
	at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1582)
	at org.apache.hadoop.mapred.ClusterWithCapacityScheduler.setUpSchedulerConfigFile(ClusterWithCapacityScheduler.java:132)
	at org.apache.hadoop.mapred.ClusterWithCapacityScheduler.startCluster(ClusterWithCapacityScheduler.java:103)
	at org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testFailingJobInitalization(TestCapacitySchedulerWithJobTracker.java:48)


FAILED:  org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testJobTrackerIntegration

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.conf.Configuration.asXmlDocument(Configuration.java:1624)
	at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1592)
	at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1582)
	at org.apache.hadoop.mapred.ClusterWithCapacityScheduler.setUpSchedulerConfigFile(ClusterWithCapacityScheduler.java:132)
	at org.apache.hadoop.mapred.ClusterWithCapacityScheduler.startCluster(ClusterWithCapacityScheduler.java:103)
	at org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testJobTrackerIntegration(TestCapacitySchedulerWithJobTracker.java:92)




Hadoop-Mapreduce-trunk - Build # 573 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/573/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 211357 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-27 15:59:30,778 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,779 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,779 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,779 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,780 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,780 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,780 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,781 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,781 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,781 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,782 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,782 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,783 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,783 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,783 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,784 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,784 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,784 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,785 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-27 15:59:30,785 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.95 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.327 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.306 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 173 minutes 3 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 572 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/572/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 208575 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-26 15:42:17,119 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,119 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,119 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,120 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,120 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,120 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,121 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,121 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,122 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,122 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,122 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,123 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,123 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,123 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,124 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,124 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,124 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,125 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,125 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-26 15:42:17,125 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.909 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.328 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.303 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 154 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7olelu(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)




Hadoop-Mapreduce-trunk - Build # 571 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/571/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 209701 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-25 15:42:07,125 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,126 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,126 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,127 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,127 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,127 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,128 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,128 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,128 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,129 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,129 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,129 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,130 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,130 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,130 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,131 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,131 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,131 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,132 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-25 15:42:07,132 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.017 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.335 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.307 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 155 minutes 45 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7olelu(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)




Hadoop-Mapreduce-trunk - Build # 570 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/570/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 208428 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-24 15:38:32,762 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,763 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,763 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,764 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,764 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,764 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,765 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,765 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,766 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,766 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,766 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,767 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,767 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,767 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,768 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,768 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,768 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,769 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,769 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-24 15:38:32,770 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.928 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.321 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.28 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 152 minutes 7 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)
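The "Error Message: null" above is all JUnit can report when a validation helper dereferences a missing object. A minimal, hypothetical Java sketch (not Hadoop's actual test code; all names here are invented) of failing fast with a message instead:

```java
import java.util.Objects;

// Hypothetical sketch: the NPE at validateNumSlotsUsedForTaskCleanup suggests
// the helper dereferenced a task object that was never scheduled. Checking the
// precondition with a message makes the report say more than "null".
public class SlotValidation {
    static int validateNumSlots(Integer slotsUsed, int expected) {
        Objects.requireNonNull(slotsUsed,
            "no cleanup task was scheduled, so slot usage is unavailable");
        if (slotsUsed != expected) {
            throw new AssertionError(
                "expected " + expected + " slots, got " + slotsUsed);
        }
        return slotsUsed;
    }

    public static void main(String[] args) {
        System.out.println(validateNumSlots(1, 1)); // prints 1
    }
}
```

With a guard like this, a flaky scheduling race surfaces as a descriptive failure rather than a bare NullPointerException at line 300.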




Hadoop-Mapreduce-trunk - Build # 569 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/569/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 208962 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-23 15:48:42,361 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,361 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,362 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,362 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,362 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,363 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,363 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,363 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,364 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,364 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,364 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,365 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,365 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,365 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,366 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,366 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,366 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,367 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,367 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-23 15:48:42,367 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.939 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.334 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.324 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 162 minutes 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)




Hadoop-Mapreduce-trunk - Build # 568 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/568/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5074 lines...]
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:147: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]       conf2.set(JTConfig.JT_IPC_ADDRESS, TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                                                             ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:154: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]       conf2.set(JTConfig.JT_IPC_ADDRESS, TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                                                             ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:156: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HTTP_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HTTP_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:187: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HTTP_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HTTP_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:193: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:201: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:203: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HTTP_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HTTP_HOST + 0);
    [javac]                            ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 11 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:533: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:602: Compile failed; see the compiler error output for details.

Total time: 2 minutes 47 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
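The eleven "cannot find symbol" errors above come from TestMRServerPorts compiling against constants (`NAME_NODE_HOST`, `NAME_NODE_HTTP_HOST`) defined in HDFS's TestHDFSServerPorts, which no longer exposes them. A self-contained, hypothetical sketch of the decoupling fix, owning local copies of the values instead of reaching into another project's test class (the constant values below are assumptions, not Hadoop's):

```java
// Hypothetical sketch: keep a local copy of the constants the test needs, so
// a rename or removal in the upstream test class cannot break this compile.
public class MRServerPortsFixture {
    // Previously read from TestHDFSServerPorts.NAME_NODE_HOST /
    // NAME_NODE_HTTP_HOST; values here are illustrative only.
    static final String NAME_NODE_HOST = "localhost:";
    static final String NAME_NODE_HTTP_HOST = "0.0.0.0:";

    // Mirrors the failing call pattern:
    //   conf2.set(JTConfig.JT_IPC_ADDRESS, NAME_NODE_HOST + 0);
    // "+ 0" string-concatenates port 0, i.e. "pick any free port".
    static String ipcAddress() {
        return NAME_NODE_HOST + 0;
    }

    public static void main(String[] args) {
        System.out.println(ipcAddress()); // prints localhost:0
    }
}
```

The trade-off is deliberate duplication: the values can drift from HDFS's, but the MapReduce build no longer fails whenever an HDFS test class is refactored.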

Hadoop-Mapreduce-trunk - Build # 567 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/567/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5072 lines...]
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:147: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]       conf2.set(JTConfig.JT_IPC_ADDRESS, TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                                                             ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:154: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]       conf2.set(JTConfig.JT_IPC_ADDRESS, TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                                                             ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:156: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HTTP_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HTTP_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:187: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HTTP_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HTTP_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:193: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:201: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HOST + 0);
    [javac]                            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestMRServerPorts.java:203: cannot find symbol
    [javac] symbol  : variable NAME_NODE_HTTP_HOST
    [javac] location: class org.apache.hadoop.hdfs.TestHDFSServerPorts
    [javac]         TestHDFSServerPorts.NAME_NODE_HTTP_HOST + 0);
    [javac]                            ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 11 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:533: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:602: Compile failed; see the compiler error output for details.

Total time: 2 minutes 45 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 566 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/566/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 205606 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-20 15:38:41,223 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,223 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,224 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,224 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,224 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,225 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,225 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,225 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,226 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,226 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,227 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,227 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,227 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,228 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,228 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,228 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,229 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,229 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,229 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-20 15:38:41,230 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.895 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.384 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.317 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 152 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)




Re: Hadoop-Mapreduce-trunk - Build # 565 - Still Failing

Posted by Todd Lipcon <to...@cloudera.com>.
HDFS-1585 to fix

On Wed, Jan 19, 2011 at 5:04 AM, Apache Hudson Server <
hudson@hudson.apache.org> wrote:

> [...]



-- 
Todd Lipcon
Software Engineer, Cloudera

Hadoop-Mapreduce-trunk - Build # 565 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/565/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5085 lines...]

compile-mapred-test:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/classes
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/testjar
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/testshell
    [javac] Compiling 323 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/classes
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestNodeRefresh.java:96: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,<nulltype>,java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, numHosts, true, null, null, hosts, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestRecoveryManager.java:318: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.mapred.JobConf,int,boolean,<nulltype>,<nulltype>,<nulltype>,<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]     dfs.startDataNodes(conf, 1, true, null, null, null, null);
    [javac]        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:297: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack2, hosts2, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:335: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack3, hosts3, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:727: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack2, hosts2, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:762: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack3, hosts3, null);
    [javac]          ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 6 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:533: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:602: Compile failed; see the compiler error output for details.

Total time: 2 minutes 59 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
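
[Editor's note] The six `cannot find symbol` errors above all point at the same thing: the test tree calls a seven-argument `startDataNodes(Configuration, int, boolean, StartupOption, String[], String[], long[])` overload that the `MiniDFSCluster` class on the compile classpath does not declare — typically a stale or mismatched hadoop-hdfs test jar. A quick way to see which overloads a classpath actually provides is reflection. The sketch below uses a local stand-in class (`FakeCluster` is hypothetical; `MiniDFSCluster` itself lives in the hadoop-hdfs test jar) purely to illustrate the technique:

```java
import java.lang.reflect.Method;
import java.util.Arrays;

// Hypothetical stand-in for a cluster class whose available overloads we
// want to inspect; substitute the real class from the classpath under test.
class FakeCluster {
    public void startDataNodes(Object conf, int n, boolean manage) {}
    public void startDataNodes(Object conf, int n, boolean manage,
                               Object op, String[] racks) {}
}

public class ListOverloads {
    public static void main(String[] args) {
        // Print every startDataNodes overload the class actually declares,
        // making a signature mismatch like the one in the log easy to spot.
        for (Method m : FakeCluster.class.getDeclaredMethods()) {
            if (m.getName().equals("startDataNodes")) {
                System.out.println(m.getName() + " takes "
                        + m.getParameterTypes().length + " parameters: "
                        + Arrays.toString(m.getParameterTypes()));
            }
        }
    }
}
```

Pointing the same loop at the `MiniDFSCluster` class file the build resolved would confirm whether the seven-argument overload is missing from that jar.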

Hadoop-Mapreduce-trunk - Build # 564 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/564/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5083 lines...]

compile-mapred-test:
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/classes
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/testjar
    [mkdir] Created dir: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/testshell
    [javac] Compiling 323 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/mapred/classes
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestNodeRefresh.java:96: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,<nulltype>,java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, numHosts, true, null, null, hosts, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapred/TestRecoveryManager.java:318: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.mapred.JobConf,int,boolean,<nulltype>,<nulltype>,<nulltype>,<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]     dfs.startDataNodes(conf, 1, true, null, null, null, null);
    [javac]        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:297: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack2, hosts2, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:335: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack3, hosts3, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:727: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack2, hosts2, null);
    [javac]          ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/test/mapred/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java:762: cannot find symbol
    [javac] symbol  : method startDataNodes(org.apache.hadoop.conf.Configuration,int,boolean,<nulltype>,java.lang.String[],java.lang.String[],<nulltype>)
    [javac] location: class org.apache.hadoop.hdfs.MiniDFSCluster
    [javac]       dfs.startDataNodes(conf, 1, true, null, rack3, hosts3, null);
    [javac]          ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 6 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:533: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:602: Compile failed; see the compiler error output for details.

Total time: 2 minutes 39 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 563 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/563/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 210603 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-17 16:12:03,687 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,688 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,688 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,689 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,689 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,689 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,690 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,690 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,690 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,691 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,691 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,691 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,691 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,692 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,692 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,693 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,693 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,694 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,694 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-17 16:12:03,694 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.965 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.356 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.304 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 185 minutes 51 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 562 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/562/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 206918 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-16 15:57:30,379 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,380 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,380 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,381 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,381 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,381 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,382 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,382 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,382 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,383 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,383 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,383 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,384 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,384 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,384 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,385 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,385 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,385 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,386 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-16 15:57:30,386 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.035 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.308 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.315 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 171 minutes 24 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 561 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/561/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 209978 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-15 15:41:35,789 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,790 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,790 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,791 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,791 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,791 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,792 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,792 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,792 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,793 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,793 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,794 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,794 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,794 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,795 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,795 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,795 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,796 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,796 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-15 15:41:35,796 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.952 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.299 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 155 minutes 29 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 560 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/560/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 212443 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-14 16:02:14,274 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,275 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,275 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,275 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,276 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,276 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,276 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,277 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,277 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,277 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,278 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,278 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,278 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,279 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,279 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,280 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,280 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,280 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,281 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-14 16:02:14,281 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.986 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.327 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.3 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 175 minutes 59 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
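
[Editor's note] The recurring `NullPointerException` at `TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup` line 300 surfaces with an error message of just "null", which makes the build report hard to act on. A common hardening for such lookups is to validate the result explicitly so the failure carries context. The sketch below is a hypothetical stand-in (the names `NullGuard`, `slotsUsed`, and `slotsByTask` are invented for illustration and do not appear in the Hadoop test):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical stand-in for a null map lookup like the one behind the NPE
// above: fail loudly with context instead of dereferencing null later.
public class NullGuard {
    static int slotsUsed(Map<String, Integer> slotsByTask, String taskId) {
        Integer slots = slotsByTask.get(taskId);
        // requireNonNull throws an NPE whose message names the missing task.
        return Objects.requireNonNull(slots,
                "no slot entry recorded for task " + taskId);
    }

    public static void main(String[] args) {
        Map<String, Integer> slots = new HashMap<>();
        slots.put("attempt_0001_m_000000", 1);
        System.out.println(slotsUsed(slots, "attempt_0001_m_000000"));
        try {
            slotsUsed(slots, "attempt_0001_r_000000");
        } catch (NullPointerException e) {
            System.out.println(e.getMessage());  // names the missing task
        }
    }
}
```

With a guard like this, the "Error Message: null" entries in the reports above would instead say which task's slot entry was missing.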




Hadoop-Mapreduce-trunk - Build # 559 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/559/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 209218 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-13 15:53:19,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,253 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,253 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,253 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-13 15:53:19,254 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.913 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.364 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.324 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 161 minutes 40 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Re: Hadoop-Mapreduce-trunk - Build # 558 - Still Failing

Posted by Todd Lipcon <to...@cloudera.com>.
On Thu, Jan 13, 2011 at 1:14 AM, Giridharan Kesavan <gk...@yahoo-inc.com> wrote:

>
> log=${ivyresolvelog} is set to quiet for the ivy-retrieve-common and
> ivy-resolve targets in the top-level build.xml file, while this is not done
> at the contrib level, i.e. src/contrib/build-contrib.xml. Hence there is no
> verbose output for the top-level mapred project.
>
> I can pretty much confirm that this build was run against the
> hadoop-common-0.23.0-20110111.025140-27.jar<
> https://repository.apache.org/content/groups/snapshots-group/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110111.025140-27.jar>
> version of Hadoop (I looked into the ivy cache).
> This is the latest version of Hadoop available in the snapshot repository.
>

Then I think the Common build isn't publishing correctly.

The reason I ask is that the TestControlledMapReduceJob timeout should be
fixed by a recent change in Common. On my local box, if I do ant mvn-install
from common trunk and then run this unit test using -Dresolvers=internal in
my mapred repo, it passes fine. If I drop resolvers=internal and run
clean-cache, it pulls the one from Apache and times out. It's pulling this:
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110111.025140-27.jar
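For reference, the experiment above amounts to roughly the following. This is a sketch only: the checkout paths are hypothetical, and the exact ant invocations are assumptions pieced together from the description and the build log, not copied commands.

```shell
# Sketch of the local reproduction described above (paths are hypothetical).
cd ~/src/hadoop-common             # a hadoop-common trunk checkout
ant mvn-install                    # publish the common SNAPSHOT locally

cd ~/src/hadoop-mapreduce          # a mapreduce trunk checkout
# Resolve against the locally installed common: the test passes.
ant -Dresolvers=internal -Dtestcase=TestControlledMapReduceJob \
    run-test-mapred-all-withtestcaseonly

# Drop resolvers=internal and clear the ivy cache: ivy pulls the stale
# hadoop-common snapshot from the Apache repository and the test times out.
ant clean-cache
ant -Dtestcase=TestControlledMapReduceJob run-test-mapred-all-withtestcaseonly
```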

-Todd
-- 
Todd Lipcon
Software Engineer, Cloudera

Re: Hadoop-Mapreduce-trunk - Build # 558 - Still Failing

Posted by Giridharan Kesavan <gk...@yahoo-inc.com>.
log=${ivyresolvelog} is set to quiet for the ivy-retrieve-common and ivy-resolve targets in the top-level build.xml file, while this is not done at the contrib level, i.e. src/contrib/build-contrib.xml.
Hence there is no verbose output for the top-level mapred project.

I can pretty much confirm that this build was run against the hadoop-common-0.23.0-20110111.025140-27.jar<https://repository.apache.org/content/groups/snapshots-group/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110111.025140-27.jar> version of Hadoop (I looked into the ivy cache).
This is the latest version of Hadoop available in the snapshot repository.
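The cache check mentioned above can be done with something like the following. This is a sketch under assumptions: ~/.ivy2/cache is ivy's default cache location, but the Hudson slave or the build may configure a different cache directory.

```shell
# List the hadoop-common jars ivy has cached, newest first, and show the
# one the build most likely resolved against. The cache path is ivy's
# default and may differ on the Hudson slave.
ls -t ~/.ivy2/cache/org.apache.hadoop/hadoop-common/jars/ | head -1
```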

-Giri




On Jan 12, 2011, at 11:39 AM, Todd Lipcon wrote:

Is it possible that Hudson isn't properly pulling the latest common
snapshots out of nexus? The ivy-retrieve-common section of this log doesn't
indicate it downloaded anything.

-Todd

On Wed, Jan 12, 2011 at 8:00 AM, Apache Hudson Server <hudson@hudson.apache.org> wrote:

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/558/




--
Todd Lipcon
Software Engineer, Cloudera


Re: Hadoop-Mapreduce-trunk - Build # 558 - Still Failing

Posted by Todd Lipcon <to...@cloudera.com>.
Is it possible that Hudson isn't properly pulling the latest common
snapshots out of nexus? The ivy-retrieve-common section of this log doesn't
indicate it downloaded anything.

-Todd

On Wed, Jan 12, 2011 at 8:00 AM, Apache Hudson Server <hudson@hudson.apache.org> wrote:

> See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/558/
>
>


-- 
Todd Lipcon
Software Engineer, Cloudera

Hadoop-Mapreduce-trunk - Build # 558 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/558/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 208715 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-12 15:59:42,121 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,122 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,122 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,123 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,123 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,123 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,124 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,124 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,124 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,125 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,125 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,125 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,126 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,126 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,126 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,127 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,127 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,128 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,128 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-12 15:59:42,128 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.911 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.328 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.268 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 171 minutes 56 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 557 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/557/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 210236 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-11 16:00:54,456 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,456 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,457 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,457 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,458 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,458 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,458 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,459 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,459 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,459 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,460 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,460 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,460 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,461 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,461 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,461 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,462 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,462 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,462 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-11 16:00:54,463 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.888 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.339 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.3 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!

Total time: 173 minutes 15 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 556 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/556/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2504 lines...]
clover:

ivy-download:
      [get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
      [get] To: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/ivy/ivy-2.1.0.jar
      [get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/ivy/ivy-2.1.0.jar

BUILD FAILED
java.net.NoRouteToHostException: No route to host
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
	at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
	at java.net.Socket.connect(Socket.java:519)
	at java.net.Socket.connect(Socket.java:469)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
	at sun.net.www.http.HttpClient.New(HttpClient.java:306)
	at sun.net.www.http.HttpClient.New(HttpClient.java:323)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:837)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:778)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:703)
	at org.apache.tools.ant.taskdefs.Get.doGet(Get.java:145)
	at org.apache.tools.ant.taskdefs.Get.execute(Get.java:78)
	at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
	at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
	at org.apache.tools.ant.Task.perform(Task.java:348)
	at org.apache.tools.ant.Target.execute(Target.java:357)
	at org.apache.tools.ant.Target.performTasks(Target.java:385)
	at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
	at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
	at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
	at org.apache.tools.ant.Project.executeTargets(Project.java:1189)
	at org.apache.tools.ant.Main.runBuild(Main.java:758)
	at org.apache.tools.ant.Main.startAnt(Main.java:217)
	at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
	at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)

Total time: 40 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 555 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/555/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 209209 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-09 15:48:18,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has null TaskStatus
    [junit] 2011-01-09 15:48:18,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has null TaskStatus
    [junit] 2011-01-09 15:48:18,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has null TaskStatus
    [junit] 2011-01-09 15:48:18,262 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has null TaskStatus
    [junit] 2011-01-09 15:48:18,262 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has null TaskStatus
    [junit] 2011-01-09 15:48:18,262 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has null TaskStatus
    [junit] 2011-01-09 15:48:18,263 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has null TaskStatus
    [junit] 2011-01-09 15:48:18,263 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has null TaskStatus
    [junit] 2011-01-09 15:48:18,263 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has null TaskStatus
    [junit] 2011-01-09 15:48:18,264 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has null TaskStatus
    [junit] 2011-01-09 15:48:18,264 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has null TaskStatus
    [junit] 2011-01-09 15:48:18,265 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has null TaskStatus
    [junit] 2011-01-09 15:48:18,265 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has null TaskStatus
    [junit] 2011-01-09 15:48:18,265 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has null TaskStatus
    [junit] 2011-01-09 15:48:18,266 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has null TaskStatus
    [junit] 2011-01-09 15:48:18,266 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has null TaskStatus
    [junit] 2011-01-09 15:48:18,266 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has null TaskStatus
    [junit] 2011-01-09 15:48:18,267 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has null TaskStatus
    [junit] 2011-01-09 15:48:18,267 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has null TaskStatus
    [junit] 2011-01-09 15:48:18,267 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has null TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.923 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.36 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.318 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!

Total time: 162 minutes 22 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 554 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/554/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 209362 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-08 15:59:47,319 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has null TaskStatus
    [junit] 2011-01-08 15:59:47,320 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has null TaskStatus
    [junit] 2011-01-08 15:59:47,320 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has null TaskStatus
    [junit] 2011-01-08 15:59:47,321 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has null TaskStatus
    [junit] 2011-01-08 15:59:47,321 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has null TaskStatus
    [junit] 2011-01-08 15:59:47,321 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has null TaskStatus
    [junit] 2011-01-08 15:59:47,322 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has null TaskStatus
    [junit] 2011-01-08 15:59:47,322 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has null TaskStatus
    [junit] 2011-01-08 15:59:47,323 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has null TaskStatus
    [junit] 2011-01-08 15:59:47,323 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has null TaskStatus
    [junit] 2011-01-08 15:59:47,323 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has null TaskStatus
    [junit] 2011-01-08 15:59:47,324 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has null TaskStatus
    [junit] 2011-01-08 15:59:47,324 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has null TaskStatus
    [junit] 2011-01-08 15:59:47,324 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has null TaskStatus
    [junit] 2011-01-08 15:59:47,325 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has null TaskStatus
    [junit] 2011-01-08 15:59:47,325 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has null TaskStatus
    [junit] 2011-01-08 15:59:47,326 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has null TaskStatus
    [junit] 2011-01-08 15:59:47,326 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has null TaskStatus
    [junit] 2011-01-08 15:59:47,326 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has null TaskStatus
    [junit] 2011-01-08 15:59:47,327 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has null TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.937 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.361 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.337 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!

Total time: 173 minutes 40 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.validateNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:300)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.__CLR3_0_274y7oleli(TestSetupTaskScheduling.java:332)
	at org.apache.hadoop.mapred.TestSetupTaskScheduling.testNumSlotsUsedForTaskCleanup(TestSetupTaskScheduling.java:314)


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 553 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/553/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 209307 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-07 16:07:04,848 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has null TaskStatus
    [junit] 2011-01-07 16:07:04,849 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has null TaskStatus
    [junit] 2011-01-07 16:07:04,849 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has null TaskStatus
    [junit] 2011-01-07 16:07:04,849 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has null TaskStatus
    [junit] 2011-01-07 16:07:04,850 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has null TaskStatus
    [junit] 2011-01-07 16:07:04,850 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has null TaskStatus
    [junit] 2011-01-07 16:07:04,850 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has null TaskStatus
    [junit] 2011-01-07 16:07:04,851 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has null TaskStatus
    [junit] 2011-01-07 16:07:04,851 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has null TaskStatus
    [junit] 2011-01-07 16:07:04,851 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has null TaskStatus
    [junit] 2011-01-07 16:07:04,852 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has null TaskStatus
    [junit] 2011-01-07 16:07:04,852 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has null TaskStatus
    [junit] 2011-01-07 16:07:04,852 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has null TaskStatus
    [junit] 2011-01-07 16:07:04,853 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has null TaskStatus
    [junit] 2011-01-07 16:07:04,853 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has null TaskStatus
    [junit] 2011-01-07 16:07:04,853 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has null TaskStatus
    [junit] 2011-01-07 16:07:04,854 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has null TaskStatus
    [junit] 2011-01-07 16:07:04,854 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has null TaskStatus
    [junit] 2011-01-07 16:07:04,854 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has null TaskStatus
    [junit] 2011-01-07 16:07:04,855 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has null TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.949 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.361 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.317 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!

Total time: 180 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 552 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/552/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 207606 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-06 16:41:02,571 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has null TaskStatus
    [junit] 2011-01-06 16:41:02,571 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has null TaskStatus
    [junit] 2011-01-06 16:41:02,572 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has null TaskStatus
    [junit] 2011-01-06 16:41:02,572 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has null TaskStatus
    [junit] 2011-01-06 16:41:02,572 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has null TaskStatus
    [junit] 2011-01-06 16:41:02,573 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has null TaskStatus
    [junit] 2011-01-06 16:41:02,573 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has null TaskStatus
    [junit] 2011-01-06 16:41:02,573 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has null TaskStatus
    [junit] 2011-01-06 16:41:02,574 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has null TaskStatus
    [junit] 2011-01-06 16:41:02,574 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has null TaskStatus
    [junit] 2011-01-06 16:41:02,574 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has null TaskStatus
    [junit] 2011-01-06 16:41:02,575 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has null TaskStatus
    [junit] 2011-01-06 16:41:02,575 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has null TaskStatus
    [junit] 2011-01-06 16:41:02,575 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has null TaskStatus
    [junit] 2011-01-06 16:41:02,576 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has null TaskStatus
    [junit] 2011-01-06 16:41:02,576 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has null TaskStatus
    [junit] 2011-01-06 16:41:02,576 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has null TaskStatus
    [junit] 2011-01-06 16:41:02,577 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has null TaskStatus
    [junit] 2011-01-06 16:41:02,577 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has null TaskStatus
    [junit] 2011-01-06 16:41:02,578 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has null TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.943 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.333 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.312 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!

Total time: 175 minutes 15 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 551 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/551/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 207064 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-05 15:53:19,223 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has null TaskStatus
    [junit] 2011-01-05 15:53:19,224 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has null TaskStatus
    [junit] 2011-01-05 15:53:19,224 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has null TaskStatus
    [junit] 2011-01-05 15:53:19,224 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has null TaskStatus
    [junit] 2011-01-05 15:53:19,225 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has null TaskStatus
    [junit] 2011-01-05 15:53:19,225 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has null TaskStatus
    [junit] 2011-01-05 15:53:19,225 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has null TaskStatus
    [junit] 2011-01-05 15:53:19,226 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has null TaskStatus
    [junit] 2011-01-05 15:53:19,226 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has null TaskStatus
    [junit] 2011-01-05 15:53:19,226 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has null TaskStatus
    [junit] 2011-01-05 15:53:19,227 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has null TaskStatus
    [junit] 2011-01-05 15:53:19,227 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has null TaskStatus
    [junit] 2011-01-05 15:53:19,228 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has null TaskStatus
    [junit] 2011-01-05 15:53:19,228 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has null TaskStatus
    [junit] 2011-01-05 15:53:19,228 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has null TaskStatus
    [junit] 2011-01-05 15:53:19,229 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has null TaskStatus
    [junit] 2011-01-05 15:53:19,229 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has null TaskStatus
    [junit] 2011-01-05 15:53:19,229 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has null TaskStatus
    [junit] 2011-01-05 15:53:19,230 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has null TaskStatus
    [junit] 2011-01-05 15:53:19,230 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has null TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.92 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.335 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.325 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!

Total time: 171 minutes 46 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 550 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/550/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 207750 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-04 15:38:13,965 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,966 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,966 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,966 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,967 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,967 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,968 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,968 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,968 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,969 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,969 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,969 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,970 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,970 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,971 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,971 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,971 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,972 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,972 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-04 15:38:13,973 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.011 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.367 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.299 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:811: Tests failed!

Total time: 151 minutes 51 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 549 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/549/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
Started by timer
Building remotely on hadoop7
hudson.util.IOException2: remote file operation failed: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk at hudson.remoting.Channel@2545938c:hadoop7
	at hudson.FilePath.act(FilePath.java:749)
	at hudson.FilePath.act(FilePath.java:735)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:589)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:537)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1116)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:479)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:411)
	at hudson.model.Run.run(Run.java:1324)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:139)
Caused by: java.io.IOException: Unable to delete /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/logs/userlogs/job_20101230131139886_0001/attempt_20101230131139886_0001_m_000000_0
	at hudson.Util.deleteFile(Util.java:261)
	at hudson.Util.deleteRecursive(Util.java:303)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:662)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:596)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1899)
	at hudson.remoting.UserRequest.perform(UserRequest.java:114)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:270)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 548 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/548/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
Started by timer
Building remotely on hadoop7
hudson.util.IOException2: remote file operation failed: /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk at hudson.remoting.Channel@2545938c:hadoop7
	at hudson.FilePath.act(FilePath.java:749)
	at hudson.FilePath.act(FilePath.java:735)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:589)
	at hudson.scm.SubversionSCM.checkout(SubversionSCM.java:537)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1116)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:479)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:411)
	at hudson.model.Run.run(Run.java:1324)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:139)
Caused by: java.io.IOException: Unable to delete /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/logs/userlogs/job_20101230131139886_0001/attempt_20101230131139886_0001_m_000000_0
	at hudson.Util.deleteFile(Util.java:261)
	at hudson.Util.deleteRecursive(Util.java:303)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.Util.deleteRecursive(Util.java:302)
	at hudson.Util.deleteContentsRecursive(Util.java:222)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:662)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:596)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1899)
	at hudson.remoting.UserRequest.perform(UserRequest.java:114)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:270)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
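
A side note on the recurring failure in builds 548 and 549: the stack traces above show the SVN checkout aborting because Hudson cannot delete a leftover task userlogs directory in the slave workspace. A minimal manual cleanup sketch is below — the workspace path is taken from the traces, but permissions/ownership of the stale files (often a different UID from test-spawned task processes) are an assumption and worth checking first.

```shell
#!/bin/sh
# Hedged sketch: remove the stale userlogs tree that blocks the SVN checkout.
# WORKSPACE path is copied from the stack traces above; adjust per slave.
WORKSPACE=${WORKSPACE:-/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk}
USERLOGS="$WORKSPACE/trunk/build/test/logs/userlogs"

# Show what is in the way before deleting (helps spot ownership problems).
ls -lR "$USERLOGS" 2>/dev/null | head -20

# rm -rf succeeds silently if the tree is already gone; it fails only when
# the invoking user lacks permission, which is the usual root cause here.
rm -rf "$USERLOGS"
```

If the delete fails for the Hudson user, the stale attempt directories were likely created by task JVMs running as another user, and the fix is a privileged cleanup or a pre-build wipe step on the slave.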