Posted to mapreduce-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2011/01/29 14:02:33 UTC

Hadoop-Mapreduce-trunk - Build # 575 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/575/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2584 lines...]
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/JobTracker.java:327: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:406: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.TaskTracker
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:403: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:64: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.IsolationRunner.FakeUmbilical
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:61: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:96: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:93: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:136: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner.Job
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:133: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 19 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:394: Compile failed; see the compiler error output for details.

Total time: 42 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
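[Editorial note on the build-575 errors above: the paired messages — "cannot find symbol: variable ProtocolSignature" followed by "@Override ... method does not override or implement a method from a supertype" — are the classic signature of compiling against a stale dependency jar: the `ProtocolSignature` class the MapReduce sources reference is absent from the Common jar on the classpath, so both the reference and the overriding method fail to resolve. The sketch below, with made-up names rather than Hadoop's, shows the override contract that breaks in that situation: when the supertype on the classpath lacks the method, the `@Override` annotation itself becomes a compile error.]

```java
// Illustrative sketch only (hypothetical names, not Hadoop's API).
// If VersionedService on the compile-time classpath did NOT declare
// getVersion, the @Override below would fail with exactly the
// "method does not override or implement a method from a supertype"
// error seen in the console log above.
interface VersionedService {
    long getVersion(String protocol);
}

class ServiceImpl implements VersionedService {
    @Override
    public long getVersion(String protocol) {
        // A return type or helper class missing from the classpath would
        // additionally surface as "cannot find symbol" here.
        return 1L;
    }
}

public class OverrideDemo {
    public static void main(String[] args) {
        VersionedService s = new ServiceImpl();
        System.out.println(s.getVersion("demo"));
    }
}
```

The usual fix is not a source change at all but refreshing the dependency (here, the hadoop-common snapshot) so the supertype and helper class exist at compile time.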

Hadoop-Mapreduce-trunk - Build # 599 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/599/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4044 lines...]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;${jackson.version} by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;${jackson.version} by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [common]
[ivy:resolve] 	com.thoughtworks.paranamer#paranamer;${paranamer.version} by [com.thoughtworks.paranamer#paranamer;2.2] in [common]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|      common      |   42  |   2   |   0   |   8   ||   34  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-common:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#raid [sync]
[ivy:retrieve] 	confs: [common]
[ivy:retrieve] 	34 artifacts copied, 0 already retrieved (13238kB/49ms)
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/ivy/ivysettings.xml

compile:
     [echo] contrib: raid
    [javac] Compiling 32 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/classes
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:50: org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid is not abstract and does not override abstract method chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,boolean,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy
    [javac] public class BlockPlacementPolicyRaid extends BlockPlacementPolicy {
    [javac]        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:109: chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid cannot override chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy; overridden method is final
    [javac]   DatanodeDescriptor[] chooseTarget(String srcPath, int numOfReplicas,
    [javac]                        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:118: cannot find symbol
    [javac] symbol  : method chooseTarget(int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long)
    [javac] location: class org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyDefault
    [javac]         defaultPolicy.chooseTarget(numOfReplicas, writer,
    [javac]                      ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 3 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:432: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:193: Compile failed; see the compiler error output for details.

Total time: 2 minutes 12 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
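[Editorial note on the build-599 raid errors above: the log shows a base-class API change — `BlockPlacementPolicy.chooseTarget` gained an extra `boolean` parameter in its abstract form while the old signature became `final` — leaving the contrib subclass both missing the new abstract method and illegally redeclaring the final one. The sketch below reproduces that conflict shape with invented names (not HDFS's classes) and shows the resolved subclass, which implements the new overload instead of the final one.]

```java
// Hypothetical illustration of the conflict in the log above; the class
// and method names are made up, only the shape of the API change is real.
abstract class PlacementPolicy {
    // Old entry point, now final: a subclass overriding this signature
    // gets "overridden method is final".
    final String chooseTarget(String src, int replicas) {
        return chooseTarget(src, replicas, false);
    }

    // New abstract overload (extra boolean, mirroring the extra parameter
    // in the javac message): a subclass that omits it is "not abstract and
    // does not override abstract method chooseTarget(...)".
    abstract String chooseTarget(String src, int replicas, boolean returnChosenNodes);
}

class RaidPolicy extends PlacementPolicy {
    @Override
    String chooseTarget(String src, int replicas, boolean returnChosenNodes) {
        return src + ":" + replicas;
    }
}

public class FinalOverrideDemo {
    public static void main(String[] args) {
        PlacementPolicy p = new RaidPolicy();
        System.out.println(p.chooseTarget("/f", 3));
    }
}
```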

Hadoop-Mapreduce-trunk - Build # 598 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/598/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4076 lines...]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;${jackson.version} by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [common]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;${jackson.version} by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [common]
[ivy:resolve] 	com.thoughtworks.paranamer#paranamer;${paranamer.version} by [com.thoughtworks.paranamer#paranamer;2.2] in [common]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|      common      |   42  |   2   |   0   |   8   ||   34  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-common:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#raid [sync]
[ivy:retrieve] 	confs: [common]
[ivy:retrieve] 	34 artifacts copied, 0 already retrieved (13238kB/53ms)
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/ivy/ivysettings.xml

compile:
     [echo] contrib: raid
    [javac] Compiling 32 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/classes
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:50: org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid is not abstract and does not override abstract method chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,boolean,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy
    [javac] public class BlockPlacementPolicyRaid extends BlockPlacementPolicy {
    [javac]        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:109: chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyRaid cannot override chooseTarget(java.lang.String,int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long) in org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy; overridden method is final
    [javac]   DatanodeDescriptor[] chooseTarget(String srcPath, int numOfReplicas,
    [javac]                        ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java:118: cannot find symbol
    [javac] symbol  : method chooseTarget(int,org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor,java.util.List<org.apache.hadoop.hdfs.server.namenode.DatanodeDescriptor>,java.util.HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node>,long)
    [javac] location: class org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyDefault
    [javac]         defaultPolicy.chooseTarget(numOfReplicas, writer,
    [javac]                      ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 3 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:432: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:30: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:193: Compile failed; see the compiler error output for details.

Total time: 2 minutes 26 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 597 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/597/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 212900 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-16 15:57:06,739 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,739 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,740 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,740 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,740 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,741 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,741 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,741 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,742 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,742 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,742 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,743 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,743 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,743 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,744 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,744 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,744 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,745 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,745 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-16 15:57:06,746 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.008 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.34 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.294 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 169 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
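[Editorial note on the `TestLocalRunner.testMultiMaps` failure above: the "Timeout occurred" assertion comes from the test harness's timeout mechanism, and the caveat in the message — the reported time does not reflect the time until the timeout — follows from how such timeouts are typically implemented: the test body runs on a worker thread and is abandoned when the budget expires. A conceptual, self-contained sketch of that pattern (this is an illustration, not JUnit's actual implementation):]

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // Run a test body on a worker thread; fail it if the time budget expires.
    static String runWithTimeout(Runnable body, long millis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(body);
        try {
            f.get(millis, TimeUnit.MILLISECONDS);
            return "passed";
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the hung body; its elapsed time is
                            // not recorded, as the report above warns
            return "Timeout occurred";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithTimeout(() -> {}, 1000));
        System.out.println(runWithTimeout(() -> {
            try { Thread.sleep(5000); } catch (InterruptedException ignored) {}
        }, 200));
    }
}
```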




Hadoop-Mapreduce-trunk - Build # 596 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/596/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 212799 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-15 16:16:22,448 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,449 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,449 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,450 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,450 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,450 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,451 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,451 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,451 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,452 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,452 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,452 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,453 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,453 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,453 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,454 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,454 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,454 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,455 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-15 16:16:22,455 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.003 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.369 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.293 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 188 minutes 41 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 595 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/595/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 211644 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-14 16:00:14,239 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,239 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,240 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,240 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,240 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,241 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,241 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,241 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,242 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,242 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,242 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,243 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,243 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,243 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,244 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,244 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,244 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,245 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,245 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-14 16:00:14,245 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.976 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.362 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.321 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 173 minutes 7 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
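For context on the recurring "Timeout occurred" assertion above: it is the message Ant's junit task reports when a forked test JVM does not finish within its configured timeout, so the reported elapsed time reflects only what ran before the kill. The sketch below is NOT how Ant enforces the timeout (Ant kills the whole forked JVM); it is just a minimal in-process analogue of a per-task timeout guard, with all names (`TimeoutGuard`, `runWithTimeout`) hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutGuard {
    // Runs a task and returns false if it does not complete within timeoutMillis.
    // Hypothetical helper; Ant's <junit timeout="..."> instead kills the forked JVM.
    static boolean runWithTimeout(Runnable task, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(task);
        try {
            f.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            f.cancel(true);  // interrupt the still-running task
            return false;
        } catch (Exception e) {
            return false;    // task threw; treat as failure here
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        boolean fast = runWithTimeout(() -> {}, 1000);
        boolean slow = runWithTimeout(() -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
        }, 100);
        System.out.println(fast + " " + slow); // prints "true false"
    }
}
```

Like the Ant reports quoted here, the guard cannot say how long the timed-out task *would* have taken, only that it exceeded the limit.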




Hadoop-Mapreduce-trunk - Build # 594 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/594/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 325849 lines...]
    [junit] 11/02/13 18:17:36 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/13 18:17:36 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/13 18:17:36 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping server on 42173
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 1 on 42173: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server listener on 42173
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/13 18:17:36 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:48199, storageID=DS-594520111-127.0.1.1-48199-1297621055411, infoPort=51980, ipcPort=42173):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 0 on 42173: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 2 on 42173: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/13 18:17:36 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:48199, storageID=DS-594520111-127.0.1.1-48199-1297621055411, infoPort=51980, ipcPort=42173):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping server on 42173
    [junit] 11/02/13 18:17:36 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/13 18:17:36 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/13 18:17:36 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/13 18:17:36 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/13 18:17:36 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/13 18:17:36 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/13 18:17:36 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 9 4 
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping server on 34454
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 0 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 1 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 2 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 5 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server listener on 34454
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 6 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 8 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 9 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 7 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 3 on 34454: exiting
    [junit] 11/02/13 18:17:36 INFO ipc.Server: IPC Server handler 4 on 34454: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.165 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 310 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 593 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/593/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 316357 lines...]
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/12 18:24:48 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/12 18:24:48 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping server on 60757
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 0 on 60757: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 2 on 60757: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server listener on 60757
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 1 on 60757: exiting
    [junit] 11/02/12 18:24:48 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:55804, storageID=DS-497933803-127.0.1.1-55804-1297535087140, infoPort=50236, ipcPort=60757):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/02/12 18:24:48 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/12 18:24:48 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/12 18:24:48 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:55804, storageID=DS-497933803-127.0.1.1-55804-1297535087140, infoPort=50236, ipcPort=60757):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping server on 60757
    [junit] 11/02/12 18:24:48 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/12 18:24:48 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/12 18:24:48 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/12 18:24:48 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/12 18:24:48 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 7 5 
    [junit] 11/02/12 18:24:48 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping server on 51760
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 0 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 3 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 1 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 2 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 4 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 5 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 8 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 7 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 6 on 51760: exiting
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/12 18:24:48 INFO ipc.Server: Stopping IPC Server listener on 51760
    [junit] 11/02/12 18:24:48 INFO ipc.Server: IPC Server handler 9 on 51760: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.184 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 316 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.streaming.TestMultipleCachefiles.testMultipleCachefiles

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 592 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/592/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 212778 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-11 15:55:10,255 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,255 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,256 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,256 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,256 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,257 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,257 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,258 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,258 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,258 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,259 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,259 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,259 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,260 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,260 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,261 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,262 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-11 15:55:10,262 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.912 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.332 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.316 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 167 minutes 25 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 591 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/591/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 318093 lines...]
    [junit] 11/02/10 18:10:39 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/10 18:10:39 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/10 18:10:39 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping server on 37712
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 1 on 37712: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 0 on 37712: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/10 18:10:39 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:33346, storageID=DS-1372034669-127.0.1.1-33346-1297361437854, infoPort=60239, ipcPort=37712):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server listener on 37712
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 2 on 37712: exiting
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/10 18:10:39 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:33346, storageID=DS-1372034669-127.0.1.1-33346-1297361437854, infoPort=60239, ipcPort=37712):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping server on 37712
    [junit] 11/02/10 18:10:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/10 18:10:39 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/10 18:10:39 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/10 18:10:39 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/10 18:10:39 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/10 18:10:39 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 5 5 
    [junit] 11/02/10 18:10:39 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping server on 49558
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 0 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 2 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 3 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 6 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server listener on 49558
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 9 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 7 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 1 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 5 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 8 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: IPC Server handler 4 on 49558: exiting
    [junit] 11/02/10 18:10:39 INFO ipc.Server: Stopping IPC Server Responder
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.468 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 300 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 590 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/590/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213363 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-10 10:43:35,840 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,840 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,841 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,841 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,841 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,842 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,842 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,842 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,843 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,843 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,843 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,844 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,844 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,845 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,845 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,845 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,846 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,846 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,846 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-10 10:43:35,847 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.979 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.308 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed
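The `checkfailure` target above touches a `testsfailed` marker file instead of failing immediately, so later suites still run; the build is only declared failed afterwards (the `Tests failed!` message from build.xml further down). A minimal sketch of that deferred-failure pattern, with illustrative names — this is not Hadoop's actual build logic:

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.function.BooleanSupplier;

// Marker-file pattern: record each suite failure by touching a file,
// keep running the remaining suites, and only fail the build at the end
// if the marker exists. Suite contents and paths here are illustrative.
public class CheckFailureSketch {

    public static boolean runAll(File marker, List<BooleanSupplier> suites)
            throws IOException {
        marker.delete();                      // start with a clean marker
        for (BooleanSupplier suite : suites) {
            if (!suite.getAsBoolean()) {
                marker.createNewFile();       // "checkfailure": note it, keep going
            }
        }
        boolean failed = marker.exists();     // deferred "Tests failed!" check
        marker.delete();
        return failed;
    }

    public static void main(String[] args) throws IOException {
        File marker = File.createTempFile("testsfailed", null);
        // One failing suite, one passing suite: the second still runs.
        boolean failed = runAll(marker, Arrays.asList(() -> false, () -> true));
        if (failed) {
            System.out.println("Tests failed!");
        }
    }
}
```

This mirrors why the console shows JUnit summaries for suites that ran after earlier failures, with the BUILD FAILED message only at the very end.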

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 170 minutes 1 second
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
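The "Timeout occurred" assertion above is not raised by the test itself: Ant's `<junit>` task kills the forked test JVM when its configured timeout elapses and substitutes this synthetic failure, which is why the reported elapsed time does not reflect the timeout. A minimal sketch of the attribute involved — the timeout value and property names here are illustrative, not taken from Hadoop's build.xml:

```xml
<!-- Ant <junit> task: "timeout" is in milliseconds and only applies when
     fork="yes"; on expiry Ant kills the JVM and records the synthetic
     "Timeout occurred..." AssertionFailedError seen in these reports. -->
<junit fork="yes" timeout="900000" printsummary="yes">
  <batchtest todir="${test.report.dir}">
    <fileset dir="${test.src.dir}" includes="**/Test*.java"/>
  </batchtest>
</junit>
```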




Hadoop-Mapreduce-trunk - Build # 589 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/589/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 319831 lines...]
    [junit] 11/02/09 18:20:01 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/09 18:20:01 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/09 18:20:01 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping server on 56258
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 0 on 56258: exiting
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 1 on 56258: exiting
    [junit] 11/02/09 18:20:02 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:60658, storageID=DS-1133570672-127.0.1.1-60658-1297275600631, infoPort=42252, ipcPort=56258):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 2 on 56258: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server listener on 56258
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/09 18:20:02 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:60658, storageID=DS-1133570672-127.0.1.1-60658-1297275600631, infoPort=42252, ipcPort=56258):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping server on 56258
    [junit] 11/02/09 18:20:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/09 18:20:02 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/09 18:20:02 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/09 18:20:02 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/09 18:20:02 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/09 18:20:02 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 8 6 
    [junit] 11/02/09 18:20:02 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping server on 40221
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 0 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 1 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server listener on 40221
    [junit] 11/02/09 18:20:02 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 2 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 3 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 5 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 4 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 6 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 9 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 7 on 40221: exiting
    [junit] 11/02/09 18:20:02 INFO ipc.Server: IPC Server handler 8 on 40221: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.608 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 312 minutes 36 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
37 tests failed.
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs(TestFairScheduler.java:685)
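The `expected:<2> but was:<0>` messages repeated through the TestFairScheduler failures below are JUnit 3's standard `assertEquals` failure format. A minimal sketch (illustrative, not Hadoop or JUnit source) of how that message is produced:

```java
// Sketch of JUnit 3 (junit.framework.Assert) failure-message formatting:
// assertEquals compares the values and, on mismatch, throws an error whose
// message interpolates both sides as "expected:<...> but was:<...>".
public class AssertMessageSketch {

    // Mirrors the shape of junit.framework.Assert's failure message.
    static String failureMessage(Object expected, Object actual) {
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError(failureMessage(expected, actual));
        }
    }

    public static void main(String[] args) {
        try {
            // e.g. the scheduler was expected to assign 2 tasks but assigned 0
            assertEquals(2, 0);
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // expected:<2> but was:<0>
        }
    }
}
```

In these reports, "expected" is the task or slot count the test anticipated from the FairScheduler and "was" is what the scheduler actually assigned; `null` error messages come from assertions made without a message argument.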


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple(TestFairScheduler.java:744)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs(TestFairScheduler.java:805)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple(TestFairScheduler.java:913)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities(TestFairScheduler.java:1027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools(TestFairScheduler.java:1100)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity(TestFairScheduler.java:1173)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple(TestFairScheduler.java:1249)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool(TestFairScheduler.java:1327)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs(TestFairScheduler.java:1369)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs(TestFairScheduler.java:1448)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits

Error Message:
expected:<0.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits(TestFairScheduler.java:1549)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight(TestFairScheduler.java:1575)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights

Error Message:
expected:<1.14> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.14> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights(TestFairScheduler.java:1613)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps(TestFairScheduler.java:1674)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces(TestFairScheduler.java:1698)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption(TestFairScheduler.java:1766)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob(TestFairScheduler.java:1844)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption(TestFairScheduler.java:1923)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools(TestFairScheduler.java:2027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption(TestFairScheduler.java:2115)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled(TestFairScheduler.java:2189)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging(TestFairScheduler.java:2243)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel(TestFairScheduler.java:2295)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel(TestFairScheduler.java:2343)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack

Error Message:
expected:<0> but was:<6200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0> but was:<6200>
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack(TestFairScheduler.java:2444)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster(TestFairScheduler.java:2511)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoPool

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2559)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools(TestFairScheduler.java:2597)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools(TestFairScheduler.java:2638)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment(TestFairScheduler.java:2671)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout(TestFairScheduler.java:2782)


FAILED:  org.apache.hadoop.mapred.TestFairSchedulerSystem.testFairSchedulerSystem

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 588 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/588/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 319048 lines...]
    [junit] 11/02/08 18:21:08 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/08 18:21:08 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/08 18:21:08 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping server on 37595
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 0 on 37595: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 2 on 37595: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server listener on 37595
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 1 on 37595: exiting
    [junit] 11/02/08 18:21:08 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:51243, storageID=DS-1530136067-127.0.1.1-51243-1297189267407, infoPort=45444, ipcPort=37595):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:134)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/08 18:21:08 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:51243, storageID=DS-1530136067-127.0.1.1-51243-1297189267407, infoPort=45444, ipcPort=37595):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping server on 37595
    [junit] 11/02/08 18:21:08 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/02/08 18:21:08 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/02/08 18:21:08 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/02/08 18:21:08 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/02/08 18:21:08 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/08 18:21:08 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 9 6 
    [junit] 11/02/08 18:21:08 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping server on 45944
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 0 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 2 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/02/08 18:21:08 INFO ipc.Server: Stopping IPC Server listener on 45944
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 1 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 5 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 3 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 4 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 8 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 9 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 7 on 45944: exiting
    [junit] 11/02/08 18:21:08 INFO ipc.Server: IPC Server handler 6 on 45944: exiting
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.553 sec

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 313 minutes 27 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
37 tests failed.
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs(TestFairScheduler.java:685)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple(TestFairScheduler.java:744)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs(TestFairScheduler.java:805)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple(TestFairScheduler.java:913)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities(TestFairScheduler.java:1027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools(TestFairScheduler.java:1100)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity(TestFairScheduler.java:1173)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple(TestFairScheduler.java:1249)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool(TestFairScheduler.java:1327)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs(TestFairScheduler.java:1369)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs(TestFairScheduler.java:1448)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits

Error Message:
expected:<0.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits(TestFairScheduler.java:1549)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight(TestFairScheduler.java:1575)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights

Error Message:
expected:<1.14> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.14> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights(TestFairScheduler.java:1613)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps(TestFairScheduler.java:1674)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces(TestFairScheduler.java:1698)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption(TestFairScheduler.java:1766)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob(TestFairScheduler.java:1844)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption(TestFairScheduler.java:1923)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools(TestFairScheduler.java:2027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption(TestFairScheduler.java:2115)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled(TestFairScheduler.java:2189)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging(TestFairScheduler.java:2243)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel(TestFairScheduler.java:2295)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel(TestFairScheduler.java:2343)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack

Error Message:
expected:<0> but was:<6200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0> but was:<6200>
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack(TestFairScheduler.java:2444)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster(TestFairScheduler.java:2511)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoPool

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2559)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools(TestFairScheduler.java:2597)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools(TestFairScheduler.java:2638)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment(TestFairScheduler.java:2671)


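[Editor's note] The recurring "expected:<X> but was:<Y>" error messages in the stanzas above are JUnit 3's standard mismatch format, emitted when an assertEquals fails. A minimal sketch of that formatting (this mimics the shape of the message, not the actual junit.framework.Assert source; the class and method names here are hypothetical):

```java
// Sketch: reproduces the "expected:<X> but was:<Y>" message shape
// seen in the failure reports above. AssertFormat is a hypothetical
// helper, not part of JUnit.
public class AssertFormat {
    public static String format(Object expected, Object actual) {
        // JUnit 3 wraps both values in angle brackets for its report
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }

    public static void main(String[] args) {
        // Same shape as the testPoolWeights failure above
        System.out.println(format(1.14, 0.0));
    }
}
```

So a failure like "expected:<1.14> but was:<0.0>" simply means the scheduler under test reported a weight of 0.0 where the test expected 1.14.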
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout(TestFairScheduler.java:2782)

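[Editor's note] The many stanzas above that show "Error Message: null" are failures whose assertion was raised without a message string: JUnit 3 throws an AssertionFailedError with a null message, and the report renders that null literally. A minimal sketch of why (assumption: the failing assert in TestFairScheduler.checkAssignment was invoked without a message argument):

```java
// Sketch: a Throwable constructed with a null message yields
// getMessage() == null, which a report prints as the literal "null".
public class NullMessageDemo {
    public static String messageOf(Throwable t) {
        // String.valueOf(null) renders as "null", matching the report
        return String.valueOf(t.getMessage());
    }

    public static void main(String[] args) {
        Throwable failure = new RuntimeException((String) null);
        System.out.println(messageOf(failure));
    }
}
```

In other words, "null" here carries no diagnostic detail; the stack trace line numbers (e.g. TestFairScheduler.java:2743) are the only clue to which assertion failed.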

FAILED:  org.apache.hadoop.mapred.TestFairSchedulerSystem.testFairSchedulerSystem

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorDeterministicReplay.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorEndToEnd.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorSerialJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestSimulatorStressJobSubmission.testMain

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 587 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/587/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213527 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-07 15:57:54,246 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,246 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,247 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,248 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,249 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,250 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,251 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-07 15:57:54,252 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.999 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.351 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.305 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 170 minutes 24 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 586 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/586/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 228582 lines...]
    [javac]     assertFalse(new Pair<Integer, Integer>(
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:54: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:74: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:80: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(Integer.valueOf(VAL_A), null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:92: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, null)
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:98: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(new Pair<Integer, Integer>(null, Integer.valueOf(VAL_A))
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:117: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(0 == new Pair<Integer, Integer>(Integer.valueOf(VAL_A),
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mrunit/src/test/org/apache/hadoop/mrunit/types/TestPair.java:125: cannot find symbol
    [javac] symbol  : method assertFalse(boolean)
    [javac] location: class org.apache.hadoop.mrunit.types.TestPair
    [javac]     assertFalse(0 == new Pair<Integer, Integer>(Integer.valueOf(VAL_A),
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 22 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:821: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:805: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 192 minutes 30 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
33 tests failed.
FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobs(TestFairScheduler.java:685)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple

Error Message:
expected:<2> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<2> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobsWithAssignMultiple(TestFairScheduler.java:744)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobs(TestFairScheduler.java:805)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithAssignMultiple(TestFairScheduler.java:913)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testJobsWithPriorities(TestFairScheduler.java:1027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithPools(TestFairScheduler.java:1100)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacity(TestFairScheduler.java:1173)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testLargeJobsWithExcessCapacityAndAssignMultiple(TestFairScheduler.java:1249)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testSmallJobInLargePool(TestFairScheduler.java:1327)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxJobs(TestFairScheduler.java:1369)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testUserMaxJobs(TestFairScheduler.java:1448)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits

Error Message:
expected:<0.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testComplexJobLimits(TestFairScheduler.java:1549)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.testSizeBasedWeight(TestFairScheduler.java:1575)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights

Error Message:
expected:<1.14> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.14> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeights(TestFairScheduler.java:1613)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps

Error Message:
expected:<1.33> but was:<0.0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1.33> but was:<0.0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolWeightsWhenNoMaps(TestFairScheduler.java:1674)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolMaxMapsReduces(TestFairScheduler.java:1698)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemption(TestFairScheduler.java:1766)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinSharePreemptionWithSmallJob(TestFairScheduler.java:1844)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemption(TestFairScheduler.java:1923)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionFromMultiplePools(TestFairScheduler.java:2027)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMinAndFairSharePreemption(TestFairScheduler.java:2115)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfDisabled(TestFairScheduler.java:2189)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testNoPreemptionIfOnlyLogging(TestFairScheduler.java:2243)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtNodeLevel(TestFairScheduler.java:2295)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingAtRackLevel(TestFairScheduler.java:2343)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack

Error Message:
expected:<0> but was:<6200>

Stack Trace:
junit.framework.AssertionFailedError: expected:<0> but was:<6200>
	at org.apache.hadoop.mapred.TestFairScheduler.testDelaySchedulingOffRack(TestFairScheduler.java:2444)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testAssignMultipleWithUnderloadedCluster(TestFairScheduler.java:2511)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoPool

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2559)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testMultipleFifoPools(TestFairScheduler.java:2597)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFifoAndFairPools(TestFairScheduler.java:2638)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hadoop.mapred.TestFairScheduler.testPoolAssignment(TestFairScheduler.java:2671)


FAILED:  org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.mapred.TestFairScheduler.checkAssignment(TestFairScheduler.java:2743)
	at org.apache.hadoop.mapred.TestFairScheduler.testFairSharePreemptionWithShortTimeout(TestFairScheduler.java:2782)


FAILED:  org.apache.hadoop.mapred.TestFairSchedulerSystem.testFairSchedulerSystem

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 585 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/585/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 214219 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-05 15:42:31,035 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,036 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,036 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,037 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,037 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,038 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,039 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,039 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,039 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,040 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,040 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,041 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,041 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,041 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,042 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,042 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,042 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,043 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,043 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-05 15:42:31,044 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.045 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.327 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.327 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 154 minutes 55 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 584 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/584/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213816 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-04 15:55:08,077 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,078 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,078 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,078 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,079 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,079 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,079 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,080 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,080 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,080 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,081 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,081 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,081 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,082 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,082 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,082 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,083 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,083 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,083 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-04 15:55:08,084 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.98 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.303 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.307 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 167 minutes 29 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 583 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/583/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213623 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-04 00:07:40,002 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,002 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,002 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,003 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,003 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,003 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,004 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,004 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,005 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,005 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,005 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,006 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,006 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,006 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,007 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,007 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,007 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,008 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,008 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-04 00:07:40,008 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.916 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.322 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.317 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 162 minutes 21 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.




Hadoop-Mapreduce-trunk - Build # 582 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/582/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 213981 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-03 15:44:28,463 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,463 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,464 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,464 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,464 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,465 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,465 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,465 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,466 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,466 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,466 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,467 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,467 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,467 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,468 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,468 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,469 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,469 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,469 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-03 15:44:28,470 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.955 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.35 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.318 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 156 minutes 43 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:241)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:422)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:368)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:333)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:461)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:442)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken$1.run(TestUmbilicalProtocolWithJobToken.java:102)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1142)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.__CLR3_0_2ky5ls2wkg(TestUmbilicalProtocolWithJobToken.java:97)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc(TestUmbilicalProtocolWithJobToken.java:75)




Hadoop-Mapreduce-trunk - Build # 581 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/581/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 215114 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-02-02 15:50:21,956 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,957 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,958 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,959 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,960 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,960 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,960 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,961 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,961 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,961 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,962 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,962 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,962 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-02-02 15:50:21,963 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 3.097 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.315 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.295 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:817: Tests failed!

Total time: 162 minutes 8 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


REGRESSION:  org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:241)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:422)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:368)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:333)
	at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:461)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:442)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken$1.run(TestUmbilicalProtocolWithJobToken.java:102)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1142)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.__CLR3_0_2ky5ls2wkg(TestUmbilicalProtocolWithJobToken.java:97)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.testJobTokenRpc(TestUmbilicalProtocolWithJobToken.java:75)




Hadoop-Mapreduce-trunk - Build # 580 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/580/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6304 lines...]
[ivy:resolve] 	found org.aspectj#aspectjrt;1.6.5 in maven2
[ivy:resolve] 	found org.aspectj#aspectjtools;1.6.5 in maven2
[ivy:resolve] 	found org.apache.hadoop#hadoop-hdfs-test;0.23.0-SNAPSHOT in apache-snapshot
[ivy:resolve] :: resolution report :: resolve 496ms :: artifacts dl 15ms
[ivy:resolve] 	:: evicted modules:
[ivy:resolve] 	commons-logging#commons-logging;1.0.4 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.2 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M4 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftplet-api;1.0.0-M2 by [org.apache.ftpserver#ftplet-api;1.0.0] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftpserver-core;1.0.0-M2 by [org.apache.ftpserver#ftpserver-core;1.0.0] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M2 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;1.0.1 by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;1.0.1 by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [test]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [test]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|       test       |   54  |   4   |   0   |   12  ||   42  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-test:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#mumak [sync]
[ivy:retrieve] 	confs: [test]
[ivy:retrieve] 	42 artifacts copied, 0 already retrieved (24336kB/93ms)

compile-test:
     [echo] contrib: mumak
    [javac] Compiling 15 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/mumak/test
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java:56: org.apache.hadoop.mapred.MockSimulatorJobTracker is not abstract and does not override abstract method getProtocolSignature(java.lang.String,long,int) in org.apache.hadoop.ipc.VersionedProtocol
    [javac] public class MockSimulatorJobTracker implements InterTrackerProtocol,
    [javac]        ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 1 error

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:1149: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:39: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 3 minutes 9 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 579 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/579/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6328 lines...]
[ivy:resolve] 	found org.aspectj#aspectjrt;1.6.5 in maven2
[ivy:resolve] 	found org.aspectj#aspectjtools;1.6.5 in maven2
[ivy:resolve] 	found org.apache.hadoop#hadoop-hdfs-test;0.23.0-SNAPSHOT in apache-snapshot
[ivy:resolve] :: resolution report :: resolve 384ms :: artifacts dl 16ms
[ivy:resolve] 	:: evicted modules:
[ivy:resolve] 	commons-logging#commons-logging;1.0.4 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.2 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M4 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftplet-api;1.0.0-M2 by [org.apache.ftpserver#ftplet-api;1.0.0] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftpserver-core;1.0.0-M2 by [org.apache.ftpserver#ftpserver-core;1.0.0] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M2 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;1.0.1 by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;1.0.1 by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [test]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [test]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|       test       |   54  |   4   |   0   |   12  ||   42  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-test:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#mumak [sync]
[ivy:retrieve] 	confs: [test]
[ivy:retrieve] 	42 artifacts copied, 0 already retrieved (24335kB/86ms)

compile-test:
     [echo] contrib: mumak
    [javac] Compiling 15 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/mumak/test
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java:56: org.apache.hadoop.mapred.MockSimulatorJobTracker is not abstract and does not override abstract method getProtocolSignature(java.lang.String,long,int) in org.apache.hadoop.ipc.VersionedProtocol
    [javac] public class MockSimulatorJobTracker implements InterTrackerProtocol,
    [javac]        ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 1 error

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:1149: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:39: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 3 minutes 37 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
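[Editor's note: the compile error above says MockSimulatorJobTracker "is not abstract and does not override abstract method getProtocolSignature(java.lang.String,long,int)". A newly added interface method breaks every implementor that has not been updated. The sketch below is a minimal, self-contained reconstruction of that situation; all type names are simplified stand-ins for the Hadoop originals (org.apache.hadoop.ipc.VersionedProtocol / ProtocolSignature), not the real API.]

```java
// Stand-in for org.apache.hadoop.ipc.VersionedProtocol after a new
// method was added to the interface. Every implementor must now
// override it, or javac reports exactly the error seen in the log:
// "<class> is not abstract and does not override abstract method
//  getProtocolSignature(java.lang.String,long,int)".
interface VersionedProtocol {
    long getProtocolVersion(String protocol, long clientVersion);

    // Newly added method that broke existing implementors.
    ProtocolSignature getProtocolSignature(String protocol,
                                           long clientVersion,
                                           int clientMethodsHash);
}

// Simplified stand-in for org.apache.hadoop.ipc.ProtocolSignature.
class ProtocolSignature {
    final long version;
    ProtocolSignature(long version) { this.version = version; }
}

// A mock in the spirit of MockSimulatorJobTracker: it compiles only
// because it overrides the new method. Deleting that override
// reproduces the build failure above.
public class MockTracker implements VersionedProtocol {
    @Override
    public long getProtocolVersion(String protocol, long clientVersion) {
        return 1L;
    }

    @Override
    public ProtocolSignature getProtocolSignature(String protocol,
                                                  long clientVersion,
                                                  int clientMethodsHash) {
        return new ProtocolSignature(getProtocolVersion(protocol, clientVersion));
    }

    public static void main(String[] args) {
        MockTracker t = new MockTracker();
        System.out.println(t.getProtocolSignature("demo", 1L, 0).version); // prints 1
    }
}
```

The fix for the real test class would follow the same shape: add the missing override, typically delegating to whatever version/signature helper the production trackers use.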

Hadoop-Mapreduce-trunk - Build # 578 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/578/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6347 lines...]
[ivy:resolve] 	found org.aspectj#aspectjrt;1.6.5 in maven2
[ivy:resolve] 	found org.aspectj#aspectjtools;1.6.5 in maven2
[ivy:resolve] 	found org.apache.hadoop#hadoop-hdfs-test;0.23.0-SNAPSHOT in apache-snapshot
[ivy:resolve] :: resolution report :: resolve 414ms :: artifacts dl 12ms
[ivy:resolve] 	:: evicted modules:
[ivy:resolve] 	commons-logging#commons-logging;1.0.4 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.2 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.1.1] in [test]
[ivy:resolve] 	commons-codec#commons-codec;1.3 by [commons-codec#commons-codec;1.4] in [test]
[ivy:resolve] 	org.slf4j#slf4j-api;1.5.2 by [org.slf4j#slf4j-api;1.5.11] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M4 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftplet-api;1.0.0-M2 by [org.apache.ftpserver#ftplet-api;1.0.0] in [test]
[ivy:resolve] 	org.apache.ftpserver#ftpserver-core;1.0.0-M2 by [org.apache.ftpserver#ftpserver-core;1.0.0] in [test]
[ivy:resolve] 	org.apache.mina#mina-core;2.0.0-M2 by [org.apache.mina#mina-core;2.0.0-M5] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-mapper-asl;1.0.1 by [org.codehaus.jackson#jackson-mapper-asl;1.4.2] in [test]
[ivy:resolve] 	org.codehaus.jackson#jackson-core-asl;1.0.1 by [org.codehaus.jackson#jackson-core-asl;1.4.2] in [test]
[ivy:resolve] 	commons-codec#commons-codec;${commons-codec.version} by [commons-codec#commons-codec;1.4] in [test]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|       test       |   54  |   4   |   0   |   12  ||   42  |   0   |
	---------------------------------------------------------------------

ivy-retrieve-test:
[ivy:retrieve] :: retrieving :: org.apache.hadoop#mumak [sync]
[ivy:retrieve] 	confs: [test]
[ivy:retrieve] 	42 artifacts copied, 0 already retrieved (24335kB/83ms)

compile-test:
     [echo] contrib: mumak
    [javac] Compiling 15 source files to /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build/contrib/mumak/test
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/mumak/src/test/org/apache/hadoop/mapred/MockSimulatorJobTracker.java:56: org.apache.hadoop.mapred.MockSimulatorJobTracker is not abstract and does not override abstract method getProtocolSignature(java.lang.String,long,int) in org.apache.hadoop.ipc.VersionedProtocol
    [javac] public class MockSimulatorJobTracker implements InterTrackerProtocol,
    [javac]        ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 1 error

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:1149: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build.xml:39: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/contrib/build-contrib.xml:229: Compile failed; see the compiler error output for details.

Total time: 3 minutes 36 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-trunk - Build # 577 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/577/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2585 lines...]
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/JobTracker.java:327: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:406: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.TaskTracker
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:403: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:64: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.IsolationRunner.FakeUmbilical
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:61: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:96: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:93: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:136: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner.Job
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:133: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 19 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:394: Compile failed; see the compiler error output for details.

Total time: 42 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
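[Editor's note: unlike the mumak failure, builds 577/576 fail in the main sources with "cannot find symbol: variable ProtocolSignature", which usually means the trunk code calls a helper class from a hadoop-common version that is not yet on the compile classpath. The sketch below reconstructs the delegation pattern visible in the failing lines (e.g. "return ProtocolSignature.getProtocolSigature(...)" in JobTracker, TaskTracker, LocalJobRunner). All types here are simplified stand-ins, not the real Hadoop API; note the log shows Hadoop's own spelling of the helper, "getProtocolSigature".]

```java
// Simplified stand-in for the static helper class the servers delegate
// to. In Hadoop it lives in org.apache.hadoop.ipc; if the jar on the
// compile classpath predates it, javac reports the log's error:
// "cannot find symbol / symbol: variable ProtocolSignature", and the
// @Override annotations below it fail as well, since the interface
// method being overridden does not exist in the old jar either.
class ProtocolSignature {
    final long version;
    ProtocolSignature(long version) { this.version = version; }

    // Stand-in for the shared signature-computing helper.
    static ProtocolSignature getProtocolSignature(Object server,
                                                  String protocol,
                                                  long clientVersion) {
        return new ProtocolSignature(clientVersion);
    }
}

// In the spirit of JobTracker/TaskTracker/LocalJobRunner: each server
// implements the per-protocol method by delegating to the helper.
public class LocalRunnerSketch {
    public ProtocolSignature getProtocolSignature(String protocol,
                                                  long clientVersion,
                                                  int clientMethodsHash) {
        return ProtocolSignature.getProtocolSignature(this, protocol, clientVersion);
    }

    public static void main(String[] args) {
        System.out.println(
            new LocalRunnerSketch().getProtocolSignature("p", 7L, 0).version); // prints 7
    }
}
```

Under this reading, the build is not fixable in mapreduce trunk alone: it clears once the snapshot of the upstream common jar containing the helper class is picked up by the build.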

Hadoop-Mapreduce-trunk - Build # 576 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk/576/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2585 lines...]
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/JobTracker.java:327: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:406: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.TaskTracker
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/TaskTracker.java:403: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:64: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.IsolationRunner.FakeUmbilical
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/IsolationRunner.java:61: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:96: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner
    [javac]     return ProtocolSignature.getProtocolSigature(
    [javac]            ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:93: method does not override or implement a method from a supertype
    [javac]   @Override
    [javac]   ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:136: cannot find symbol
    [javac] symbol  : variable ProtocolSignature
    [javac] location: class org.apache.hadoop.mapred.LocalJobRunner.Job
    [javac]       return ProtocolSignature.getProtocolSigature(
    [javac]              ^
    [javac] /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/src/java/org/apache/hadoop/mapred/LocalJobRunner.java:133: method does not override or implement a method from a supertype
    [javac]     @Override
    [javac]     ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] 19 errors

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-trunk/trunk/build.xml:394: Compile failed; see the compiler error output for details.

Total time: 42 seconds
+ RESULT=1
+ [ 1 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 1
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.